dlab IRC 2017 November 7th

#dlab logs for 2017-11-07

Current topic
A distributed laboratory. Everything which appears here is logged at https://rawl.es/dlab/irc/2017/
rawles (Simon Rawles)
[16:39:01Z] Has anyone in here tried out Coq? (https://coq.inria.fr/) I know adamfc was talking about it (https://rawl.es/dlab/irc/2017/10/22/#t-183737) but I forgot to ask whether he'd used it. I'm reading a bit about inference in first-order logic and might give it a try soon.
adamfc (Adam Forsythe-Cheasley)
[16:40:24Z] oh excellent
[16:40:29Z] no, I've not actually used it
[16:40:37Z] I'd be interested to hear how you get on
rawles
[16:40:53Z] I wonder if we can somehow do a little project using it? I don't know enough about it to suggest what that might be, though.
adamfc
[16:41:47Z] personally, I was going to try encoding some simple proof
[16:42:08Z] maybe something like pythagoras
[16:42:14Z] and get it to check the proof
[16:42:19Z] just to see how it works
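As a taste of that workflow, a minimal sketch in Coq, assuming only the standard library: not the Pythagoras proof, just a throwaway lemma (add_zero, a made-up name) proved by induction, so that Coq has something to check. Paste it into coqtop or CoqIDE and Coq replays and verifies each step.

    (* A trivial lemma, proved by induction and checked by Coq. *)
    Lemma add_zero : forall n : nat, n + 0 = n.
    Proof.
      induction n.
      - reflexivity.                     (* base case: 0 + 0 = 0 *)
      - simpl. rewrite IHn. reflexivity. (* step: S n + 0 = S n  *)
    Qed.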
rawles
[16:43:26Z] Sounds like a good first step. We could go through this sometime: https://www.di.ens.fr/~zappa/teaching/coq/ecole11/summer/lectures/lec1.pdf
adamfc
[16:44:19Z] sounds good
rawles
[16:44:35Z] Maybe we could start next week? I'm quite busy this week.
adamfc
[16:45:11Z] me too
rawles
[16:57:45Z] It would be fun to get together and try to explore it a bit, but I know that's not very easy practically (for example, joeytwiddle is in Asia).
[16:58:27Z] A few months ago I attended a Google Hangout where we used screen-sharing to all build up a model (actually a 'construal') together.
[16:59:58Z] It might work in this case, I don't know.
[17:00:19Z] supertime: Are you interested?
[17:10:56Z] joeytwiddle: Are you interested?
[17:25:51Z] I also have a question for supertime. Has anyone tried using LCS/XCS to learn (near-)minimal Turing machines that output some string? I was reading about Kolmogorov complexity again yesterday and thought it might be fun.
supertime (Tim Kovács)
[17:44:03Z] I don't think so. Will Browne would know, and his FSM learning is probably the closest to that. It has probably been done with GAs rather than LCS. But maybe Turing machines are too expressive to be efficiently learnable, and something like FSMs is a good compromise. I remember a long time ago someone tried to evolve C++ code, which is obviously very expressive, but evolution just couldn't make anything useful with it. On the other hand, using a grammar to constrain programs seems to work very well. So maybe if you use a grammar to constrain Turing machines it could work.
rawles
[17:47:05Z] I suppose FSM learning would be more interesting because you'd end up with substructures you can pick out and see how well or badly the algorithm is doing. Would the grammar constrain the program to enforce that sort of substructure?
[17:47:34Z] "A FSM has the same computational power as a Turing machine that is restricted such that its head may only perform "read" operations, and always has to move from left to right. That is, each formal language accepted by a finite state machine is accepted by such a kind of restricted Turing machine, and vice versa."
[17:47:56Z] (https://xlinux.nist.gov/dads/HTML/finiteStateMachine.html)
[17:48:51Z] I suppose you can draw a Turing machine's finite control as an FSM-style state diagram - https://en.wikipedia.org/wiki/Turing_machine#/media/File:State_diagram_3_state_busy_beaver_2B.svg
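The restriction in that NIST definition is easy to make concrete: a head that can only read and only move right never uses the tape as storage, so the machine degenerates to a DFA. A small sketch in Python (the even-parity language is an arbitrary choice):

    def run_fsm(transitions, start, accepting, tape):
        """Run a DFA over `tape`: a read-only TM head moving left to right."""
        state = start
        for symbol in tape:                # the head never moves left
            state = transitions[(state, symbol)]
        return state in accepting

    # Accept binary strings containing an even number of 1s.
    table = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd",  "0"): "odd",  ("odd",  "1"): "even"}
    print(run_fsm(table, "even", {"even"}, "1011"))  # False: three 1s
    print(run_fsm(table, "even", {"even"}, "1001"))  # True: two 1s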
supertime
[17:58:50Z] Yes a grammar can constrain the structure
rawles
[18:09:05Z] If we replaced the Turing machine in the definition of Kolmogorov complexity with that grammar-constrained FSM, some of the output strings might be much more complex, and others far simpler. It would be interesting (though very expensive indeed) to try to characterise those. I guess the interesting thing would be making (near-)optimal programs for some (class of) output, after you add some grammar-specified constraint or some new regex/FSM feature.
[18:09:49Z] I'm not sure of the problem that's being solved here though :)
[23:48:12Z] A good outcome would be a representation/language/machine with which the GA/XCS can learn easily and which captures regularities in the output, analogous to the Kolmogorov complexity, so that the program is a good compression of some output string considered complex. I know the Kolmogorov complexity is used as a kind of representation-agnostic idea of the complexity of a string, but it might also be useful as a sort of baseline.
[23:49:16Z] How this relates to open-ended learning, I'm not sure. Maybe it's a heuristic for finding good abstractions, maybe with those constraining grammars.
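To make the baseline idea concrete, a brute-force sketch (a toy construction, not anything from the discussion): fix some reference language, enumerate its programs smallest-first, and return the first one that emits the target string. The count of run-length pairs here stands in for program length, and the toy language is a stand-in for whatever grammar-constrained machine the GA/XCS would search over.

    from itertools import product

    def toy_run(program):
        """Toy language: a program is a tuple of (char, count) run-length pairs."""
        return "".join(char * count for char, count in program)

    def shortest_program(target, alphabet="01", max_pairs=4, max_count=8):
        """Return the first program found at the smallest program size."""
        for n in range(1, max_pairs + 1):          # smallest programs first
            pairs = product(alphabet, range(1, max_count + 1))
            for prog in product(pairs, repeat=n):
                if toy_run(prog) == target:
                    return prog
        return None

    print(shortest_program("00001111"))  # (('0', 4), ('1', 4)): very regular
    print(shortest_program("0110"))      # needs 3 pairs despite being shorter

The program size found is an upper bound on the string's complexity relative to that language, which is the representation-relative analogue of the complexity gap described above.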

All timestamps are in UTC. If you want to join in, channel information is available. This page is generated from raw logfile dlab-2017-10-31-134844.txt.