Rich's talks are always great. I would love to hear him debate someone knowledgeable with an opposing view.
There is nobody with an opposing view, because his view is the correct solution and everyone thinking about it and using it for a while knows it :)
What about the throughput cost? Aren't you pushing the entire weight of the DB out and back up when the live index is sent out, or when storage is pulled as it's needed for a query?
I listened to this entire talk replacing the notion of a database with Web API. It still makes sense!
Yeah, that is why there is PouchDB and such. Master-master replication to clients... wooohooo
@YuriyBrazhnyk PouchDB is great but I don't think it has the same notion of time.
Does anyone know a libre/open-source DBMS implementing this architecture?
There is CRUX DB with a similar architecture (and even based on Clojure too)
Late to the party, but check out Datahike!
This talk is new enough to be encoded in WebM and viewable without Flash. Does it have ads or something that prohibits YouTube from making it available this way?
I panicked at 0:21:37 because of the dirt marks on the screen!
The perceivers would be doing queries in which language? And where is the semantics? How would perceivers know the meaning of the data without a clear data model? Because, beyond the worry about time (which I completely agree with), no data model as such has been proposed, only an architecture and computational processes in a physical system without a formal system backing them.
Hey apeiro, I had three other people try and they had the same problem as me. It may be that you have h.264 support in your browser, whereas I only have WebM. In fact some videos that used to work for me have stopped, yet others continue to work. Maybe this is just a temporary problem going on behind the scenes.
I was thinking it would be useful to be able to map a file into memory and construct a value by appending records to the file. There is an impedance mismatch for functional code when it takes time to read and write values to a file; it would be useful to manipulate a value directly in the file.
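A minimal sketch of that idea in Python: records are appended to the file, and reads decode them in place through `mmap` rather than loading a separate copy. The file name and fixed-size record layout are illustrative assumptions, not anything from the talk.

```python
import mmap
import os
import struct

RECORD = struct.Struct("<q")  # one little-endian 64-bit int per record (illustrative)
PATH = "records.bin"

if os.path.exists(PATH):
    os.remove(PATH)  # start from an empty value for this demo

def append_record(path, value):
    # Appending extends the on-disk value without rewriting what's already there.
    with open(path, "ab") as f:
        f.write(RECORD.pack(value))

def read_records(path):
    # Map the file and decode records in place, directly from the page cache.
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return [RECORD.unpack_from(mm, off)[0]
                    for off in range(0, len(mm), RECORD.size)]

append_record(PATH, 1)
append_record(PATH, 2)
print(read_records(PATH))  # -> [1, 2]
```

Because old records are never rewritten, already-read prefixes of the file stay valid, which is the same property that makes the append-only approach cache-friendly.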
Ask the designers of storage solutions to provide drivers that work through transactions with persistent data at the system level, and you will get that :) But that's another layer.
.NET's LINQ goes a long way toward injecting higher-order querying into OO languages. It's nice, but the mechanics are very complex.
Wonderful stuff. It worries me however that you can't, in general, get serialisable isolation level with this approach. If you added that, you'd lose the ability to read-cache to the same extent, because you need read-locks that apply to every transaction that wishes to add any novelty. But it worries me even more that Rich doesn't mention this problem.
/bookmarking this to watch some day
As far as I understand, Datomic has a serializable isolation level. Moreover, it is not just serializable, it is actually serialized, because all transactions are processed sequentially by one thread of control in the transactor.
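The single-thread-of-control idea can be shown with a toy sketch (this is an illustration of the general technique, not Datomic's actual implementation): all writers hand transaction functions to one worker thread, which applies them one at a time against the current value.

```python
import queue
import threading

class Transactor:
    """Toy transactor: every write funnels through one thread of control,
    so transactions apply one at a time, each against the database value
    left by the previous one (serialized, hence trivially serializable)."""

    _STOP = object()

    def __init__(self):
        self._db = {}                  # current database value
        self._inbox = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            item = self._inbox.get()
            if item is self._STOP:
                return
            tx_fn, done = item
            # tx_fn is a function from the current db value to a new one.
            self._db = tx_fn(dict(self._db))
            done.set()

    def transact(self, tx_fn):
        done = threading.Event()
        self._inbox.put((tx_fn, done))
        done.wait()                    # block until the transactor applies it

    def db(self):
        return dict(self._db)          # point-in-time snapshot

def bump(db):
    db["n"] = db.get("n", 0) + 1
    return db

tx = Transactor()
workers = [threading.Thread(target=lambda: [tx.transact(bump) for _ in range(100)])
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(tx.db()["n"])  # -> 400: no lost updates despite 4 concurrent writers
```

Because each `tx_fn` sees the result of the previous transaction, read-modify-write races inside a transaction function are impossible by construction; the open question in the thread is about decisions made from reads taken *outside* the transactor.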
Sergey Smyshlyaev I haven't dug deep into Datomic, but I don't think it can prevent phantom reads, since it lacks the kind of locks that would make that possible. Serialising the individual updates that make up compound transactions does not serialise the transactions; phantoms can still occur. Most applications are fine with repeatable reads anyhow, and most DBMS default to that b/c serialisable is too costly. But maybe I've missed something...
Clifford Heath Datomic reads are consistent, in that when you do a read you must explicitly select the point in time that you want your read to happen. At any time, you can query against any time-version of the database. A database "value" - that is, the database at a specific point in time that you are reading against - is always as of a specific transaction, and never has any data from the "middle" of a transaction.
About read caching: You're thinking about "updates" in terms of "this thing has changed and the way it used to be is gone", which is not a good fit for change in Datomic. What would be more accurate is that when you do a read at some point after T, you see "this thing has this value as of T", and if there are some changes at T2, a read after that point can see "this thing has this value as of T; it no longer has this value as of T2; it has this other value as of T2". So for the purpose of caching, you never have to invalidate "this thing has this value as of T", because it always remains true.
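The "never invalidate an as-of answer" property can be sketched with a toy append-only fact store (entity names and the tuple shape are illustrative, not Datomic's storage format): new transactions only append assertions and retractions, so the answer to "what was the value as of T" is fixed forever once T has passed.

```python
from collections import namedtuple

# added=False marks a retraction of a previously asserted value.
Fact = namedtuple("Fact", "entity attr value tx added")

class FactStore:
    """Append-only store of (entity, attribute, value, tx, added) facts.
    A read 'as of' tx T sees exactly the assertions in effect at T, so
    the answer for a given T never changes as later txs are appended."""

    def __init__(self):
        self.log = []
        self.tx = 0

    def transact(self, assertions, retractions=()):
        self.tx += 1
        for e, a, v in assertions:
            self.log.append(Fact(e, a, v, self.tx, True))
        for e, a, v in retractions:
            self.log.append(Fact(e, a, v, self.tx, False))
        return self.tx

    def value_as_of(self, entity, attr, t):
        current = None
        for f in self.log:             # log is ordered by tx
            if f.tx > t:
                break                  # ignore all novelty after t
            if f.entity == entity and f.attr == attr:
                if f.added:
                    current = f.value
                elif f.value == current:
                    current = None
        return current

store = FactStore()
t1 = store.transact([("order-1", "status", "open")])
t2 = store.transact([("order-1", "status", "shipped")],
                    retractions=[("order-1", "status", "open")])
print(store.value_as_of("order-1", "status", t1))  # -> open
print(store.value_as_of("order-1", "status", t2))  # -> shipped
```

A cached result for `("order-1", "status", t1)` stays valid no matter how many transactions follow, which is exactly why the read side can cache so aggressively.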
Michael Gaare No Michael, I did not misunderstand updates. The simple fact is that with Datomic, if I want to look at the database value and then decide how to make changes, I cannot prevent another application supervening and inserting data that would have changed my decision. That means I cannot serialise transactions.
I believe it is viewable without flash (I'm viewing it w/o flash right now).
Do you have HTML5-mode enabled?
The idea of facts that might be true or have been true just sounds like a logical predicate with a timestamp as one of its variables. It does not change the logic of the relational data model in any way (at least as a theory). The definition of the information model seems a bit floppy to me. A nice reference here would be the data-based definition of information as semantic content by Luciano Floridi. Data is the lack of difference in the perceivable/perceived world. So, if you think of time as a unidirectional space, you can treat temporal changes as data just as much as differences that do not change over the temporal dimension.
Another great talk!
Stuff like this works only if no time dilation occurs which might be OK for you if you don't want to leave this miserable locality called Earth :)
ZODB for the win!
Criticism first. It seems that a bunch of magic is still assumed in terms of storage CAP, which just pushes the magic further down but doesn't eliminate it. That makes me skeptical about the rest: if this is being shrugged off, how can I trust there weren't other blind spots?
Database as a value? Yes, SQL tables are relational variables storing relations which are sets of tuples of the same type.
I've been building temporal databases myself on top of SQL, and it's not that pretty: there is extra work involved because the required tools aren't always there.
There's also the whole SQL-vs-relational tension: Chris Date, for example, says they aren't the same, which is pretty obvious when you look at how SQL deviates from relational theory for performance reasons. According to Brian Beckman, there are two traditions in programming languages: one focuses on performance, so it starts from the raw hardware and adds abstraction as far as it feels it can, while the other starts from math and removes abstraction. The first is represented by FORTRAN, C and others; the second by LISP, Prolog, the relational model, etc. I'm not sure where SQL sits. Perhaps the latter, but the way SQL works compared to the purely relational model is very much tied to hardware and efficiency, as Rich points out. But the same ideas have been floating around elsewhere too, as Rich says.
Append-only "databases" that are like logs: that's very much what this sounds like. This one seems slightly superior, though, due to the audit capabilities baked directly into the model. I like the idea of reifying transactions to make changes traceable.
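A small sketch of what "reifying transactions" buys you for auditing (the class, field names, and example entities are all made up for illustration): each transaction is recorded as an entity of its own, carrying who/when/why metadata, so provenance queries fall out naturally.

```python
import time

class TxLog:
    """Append-only log in which each transaction is reified as an entity
    carrying audit metadata (who, when, why) alongside its datoms."""

    def __init__(self):
        self.txs = []

    def transact(self, datoms, who, note=""):
        tx = {"id": len(self.txs) + 1,
              "who": who,
              "when": time.time(),
              "note": note,
              "datoms": list(datoms)}   # (entity, attribute, value) triples
        self.txs.append(tx)
        return tx["id"]

    def audit(self, entity):
        # Provenance of an entity: every tx that touched it, by whom, and why.
        return [(tx["id"], tx["who"], tx["note"])
                for tx in self.txs
                if any(d[0] == entity for d in tx["datoms"])]

log = TxLog()
log.transact([("acct-7", "balance", 100)], who="alice", note="open account")
log.transact([("acct-7", "balance", 90)], who="billing", note="monthly fee")
print(log.audit("acct-7"))
# -> [(1, 'alice', 'open account'), (2, 'billing', 'monthly fee')]
```

In a conventional update-in-place table this trail has to be bolted on with triggers or history tables; here it is just another query over data the model already keeps.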
Good talk but like others have said there wasn't that much consideration of any alternate points of view.
58:50 Aristotelian Form | Perceiver 1:01:30 :O :O :O 1:03:44 LMAO
A database system as described by Rich is going to be 10x slower than a traditional RDBMS... and probably take 10x more space on disk as well.
Dude, the first ten minutes of this video talk about application data caching and the other side complaining about UI programmers.