(the slides contain a lot of material)
David Mandelin (Mozilla)
- Front End
- Interpreter (runs the bytecode)
- DOM, standard library, garbage collector
JS is untyped (no static type declarations), and that makes it slower. A simple expression such as z = x + y can mean many different things. In addition there are internal types, and objects can have different shapes. JS needs boxed values, so every value has two parts: the box (the type tag) and the unboxed payload.
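A small sketch of why z = x + y is expensive without type information: the same + operator can mean integer addition, floating-point addition, or string concatenation, so the interpreter must check the boxed types of both operands every time. (The function name here is illustrative, not from the talk.)

```javascript
// The same source-level operation dispatches on runtime types.
function add(x, y) {
  return x + y;
}

console.log(add(1, 2));      // integer addition: 3
console.log(add(1.5, 2.25)); // floating-point addition: 3.75
console.log(add("1", 2));    // string concatenation: "12"
console.log(add(1, {}));     // object coerced to string: "1[object Object]"
```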
To interpret code, the engine goes through a lot of steps.
JS gets a lot faster with the addition of a JIT. In the future there will be type-specializing JIT compilers. The JIT removes many of those steps, and JIT-compiled code can keep things in registers.
There is a separate JIT for regexes. Regex performance has improved a lot recently.
ICs: a mini-JIT for object operations called inline caching. It speeds up property access a lot. Global variable access, direct property access, and closure variable access are all improved by ICs. Prototypes don't create much of a performance issue, thanks to ICs (the shape of `new C()` objects determines the prototype).
Some access patterns slow down ICs, for example passing many different shapes through the same access site. The exception is Opera, where shapes don't seem to matter.
Properties in the slow zone (with variations across browsers): DOM access, undefined properties, scripted getters, scripted setters.
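The shape-sensitivity of ICs can be sketched like this: an access site that only ever sees one shape stays fast, while a site fed many different shapes falls back to a slower generic lookup. (Function and property names here are illustrative.)

```javascript
// This property access site gets an inline cache.
function getX(obj) {
  return obj.x;
}

// Monomorphic: every object has the same shape ({x}), so the IC
// hits on every call after the first.
for (let i = 0; i < 1000; i++) {
  getX({ x: i });
}

// Megamorphic: each object carries a uniquely named extra property,
// so each has a different shape and the IC keeps missing.
for (let i = 0; i < 1000; i++) {
  const obj = { x: i };
  obj["p" + i] = i; // unique property name -> new shape every iteration
  getX(obj);
}
```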
Type-specializing JIT: TraceMonkey in Firefox 3.5+ and Crankshaft in Chrome. If JS had type declarations, a type-specializing JIT could make it faster, but JS will not get them any time soon. So a solution came up: run the program for a bit and monitor the types, then recompile code optimized for those observed types. Only the hot code is optimized, because compilation is costly. TraceMonkey waits for 70 iterations; Crankshaft decides according to a profile.
Current limitation: what happens if your types change after compilation? With just a few changes, there is no issue. With a lot of changes, the engine gives up and deoptimizes back to the basic JIT.
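Type stability can be sketched as follows: code whose variables always hold the same type lets the specialized JIT keep running, while a variable that changes type mid-stream defeats the specialization. (The function name is illustrative.)

```javascript
// Type-stable: `total` and every element are always numbers, so a
// type-specializing JIT can compile this loop with unboxed doubles.
function sumStable(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

const nums = [];
for (let i = 0; i < 100; i++) nums.push(i);
sumStable(nums); // hot loop, compiled specialized for numbers

// Type-unstable: the same variable holds a number, then a string,
// forcing the engine back to generic (boxed) operations.
let total = 0;
total += 1;   // total is the number 1
total += "2"; // total is now the string "12"
```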
Type Inference for JITs
This is still research and not deployed in any browser. The goal is to get rid of the last few instances of boxing: by analyzing the code, the engine can work out what the types will be across the program.
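A sketch of the kind of deduction type inference makes without running the code: in the snippet below every value is provably an integer, so in principle no boxing is needed anywhere. (The annotations in comments are illustrative of the analysis, not engine output.)

```javascript
// What a type-inferring engine could deduce statically:
function sumTo(n) {              // n: int (from the only call site below)
  let total = 0;                 // total: int
  for (let i = 0; i < n; i++) {  // i: int
    total += i;                  // int + int -> int
  }
  return total;                  // return type: int
}

sumTo(100); // the single call site passes an int
```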
Objects, arrays, strings, function objects, and closure environments all allocate memory. GC pauses your program: while the GC runs, JS doesn't. The basic GC algorithm is mark-and-sweep: traverse all reachable objects and recycle the objects that are not reachable. To be safe, JS is paused during collection, sometimes for 100 ms, which is long. For animation this creates serious issues.
- Generational GC: optimized for programs that create many short-lived objects. Objects are created in a frequently collected nursery area, and long-lived objects are promoted to a rarely collected tenured area. Only Chrome seems to do this. Overall it creates fewer pauses: you keep what is important and likely to stay around.
- Incremental GC: Do a little bit of GC traversal at a time. (research @ mozilla)
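A sketch of why allocation rate matters for GC pauses, e.g. in an animation loop: a callback that allocates a fresh object every frame produces a steady stream of short-lived garbage, while reusing a preallocated object produces none. (Function and variable names here are illustrative, not from the talk.)

```javascript
// Allocates a new point object every frame: many short-lived
// objects, so the GC has to run more often.
function frameAllocating(t) {
  const pos = { x: Math.cos(t), y: Math.sin(t) };
  return pos.x + pos.y;
}

// Reuses one preallocated scratch object: no per-frame garbage.
const scratch = { x: 0, y: 0 };
function frameReusing(t) {
  scratch.x = Math.cos(t);
  scratch.y = Math.sin(t);
  return scratch.x + scratch.y;
}
```

Both compute the same result; they differ only in how much garbage they create per call.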
Tiny changes might affect performance.
- Strings: sometimes faster than you think
- Arrays: important to keep arrays dense, with contiguous indexes; sparse arrays are 3-15x slower
- Iteration over arrays
- Function calls: they use ICs
- Creating objects (constructors are fast on Chrome)
- OOP style
- eval and with
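The dense-vs-sparse array point above can be sketched as follows (the sizes are illustrative): contiguous indexes starting at 0 let the engine use a flat backing store, while a large gap forces a slower, dictionary-like representation.

```javascript
// Dense: indexes 0..999 with no holes -> flat backing store.
const dense = [];
for (let i = 0; i < 1000; i++) dense[i] = i;

// Sparse: one faraway index leaves a huge hole, so the engine
// typically switches to a dictionary-like representation.
const sparse = [];
sparse[0] = 0;
sparse[1000000] = 1; // hole of a million missing elements

// Iterating the dense array touches only real elements.
let sum = 0;
for (let i = 0; i < dense.length; i++) sum += dense[i];
```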
Top 5 things to know:
- Avoid eval, with, exceptions
- Avoid creating objects in hot loops
- Use dense arrays
- Write type-stable code
- Talk to us
JS engine devs want to help you.