This is the mail archive of the guile@cygnus.com mailing list for the guile project.
"ccf::satchell"@hermes.dra.hmg.gb writes:

> Here is the smallest file I have yet got a leak on; neither networking
> nor threads seem to be essential.  It looks like the problem could be
> with closures?
[...]
> (do ((i (make-vector 10000 0.0)
>         (let* ((v (make-vector 10000 0.0))
>                (a (lambda (x) (+ x 2))))
>           ;;; if you put an IMP into the vector there is no problem.
>           ; (vector-set! v 9997 2)
>           ;; But a closure is lethal!
>           (vector-set! v 9998 a)
>           ; This gc does not fix things.  It just runs slower
>           ; (gc)
>           v)))
>     (#f)
>   ;;; Neither does this one
>   ; (gc)
>   )

Yes, the problem is with closures.  The garbage collector isn't
sensitive to the fact that your lambda closure doesn't refer to any
variables in its "surrounding" environment.

What happens is this:

1. An environment E1 is created where
     i is bound to vector V1
     v is bound to vector V2
2. The lambda closure L1 closes over E1.
3. L1 is stored into V2.

When the next turn of the loop starts:

4. An environment E2 is created where
     i is bound to vector V2 (returned by the step expression)
     v is bound to vector V3
5. The lambda closure L2 closes over E2.

and so on...

Note how you create a spiral of references through memory: when L2 is
marked, the GC will also mark its environment E2; E2 refers to V2; V2
contains L1; L1 refers to E1.

What we see is how the illusion of infinite memory breaks down.  I
don't know how many Scheme interpreters handle this case.  SCM and
Guile don't.

One way to solve it would be to make sure that the GC only marks
variables referenced from within the closure.  But this requires
pre-compilation of the closure, which doesn't fit very well with the
current fast memoization scheme of evaluation in SCM and Guile.  I see
no way to solve this problem which is both easy and acceptable, so I
won't do anything about it now.
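Given the analysis above, one practical way for the user to avoid the leak today is to hoist the closure out of the loop body, so it closes over the top-level environment instead of the per-iteration frame.  This is only a sketch based on the reported behaviour (the name `add2` is made up); it breaks the spiral because the stored closure no longer carries a reference back to the environment that binds each `v`:

```scheme
;; Hypothetical workaround: define the closure at top level, so it does
;; not close over the loop's environment frames E1, E2, ...
(define add2 (lambda (x) (+ x 2)))

(do ((i (make-vector 10000 0.0)
        (let ((v (make-vector 10000 0.0)))
          ;; add2's environment is the top level, not this frame, so
          ;; marking add2 never reaches the previous iteration's vector
          ;; and the GC can reclaim it.
          (vector-set! v 9998 add2)
          v)))
    (#f))   ; loops forever, as in the original test case
```

The same effect should follow from any arrangement where the closure stored into the vector does not (transitively) reference the environment in which the vector itself is bound.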
I've earlier argued that we should move from lazy memoization to
pre-compilation, because that would make it easier to introduce an
efficient syntax-case macro system, and it would make some
optimizations of the evaluator possible.  It remains to be seen when
somebody has time to do this, though...

/mdj