Its goal is to give a better sense of how much computation is going on in Mozilla automation.
The current TreeHerder UI surfaces job durations, but only per job. To get a sense of how much we stress
our automation, we have to click on each individual job and do the sum manually.
This tool does that sum for you.
It also tries to rank the jobs by duration. I would like to open minds about the possible impact we may have on the environment here.
For that, I translate these durations into something fun that doesn’t necessarily make any sense.
What is that car’s GIF?
The car is a Trabant. This car is often seen as symbolic of the former East Germany and of the collapse of the Eastern Bloc in general. This part of the tool is just a joke. You may prefer to only look at the durations, which are meant to be trustworthy data. Translating a worker duration into CO2 emissions is almost impossible to get right. And that’s what I do here: translate worker duration into a potential energy consumption, which I translate into a potential CO2 emission, before finally translating that CO2 emission into the equivalent emission of a Trabant over a given distance in kilometers.
Power consumption of an AWS worker per hour
Here is a really rough estimate of Amazon AWS CO2 emissions for a t4.large worker.
The power usage of the machines these workers run on could be 0.6 kW.
Such a worker uses 25% of one of these machines.
Then let’s say that Amazon’s Power Usage Effectiveness is 1.1.
That means one hour of a worker consumes 0.165 kWh (0.6 * 0.25 * 1.1).
CO2 emission of electricity per kWh
Based on the US Environmental Protection Agency (source), the average CO2 emission is 998.4 lb/MWh.
So 998.4 * 453.59237 g/lb = 452,866 g/MWh, and 452,866 / 1,000 ≈ 452 g of CO2/kWh.
Unfortunately, this data is already old. It comes from a 2018 report, which seems to cover 2017 data.
CO2 emission of a Trabant per km
A Trabant emits 170 g of CO2/km (source). (Another source reports 140 g, but let’s assume it emits a lot.)
Final computation
Trabant's kilometers = "Hours of computation" * "Power consumption of a worker per hour"
                     * "CO2 emission of electricity per kWh"
                     / "CO2 emission of a Trabant per km"
Trabant's kilometers = "Hours of computation" * 0.165 * 452 / 170
=> Trabant's kilometers = "Hours of computation" * 0.4387058823529412
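The whole chain above can be written as a small sketch (the constants are the rough estimates from this post, not authoritative figures):

```javascript
// All constants are the rough estimates discussed above.
var KWH_PER_WORKER_HOUR = 0.6 * 0.25 * 1.1; // machine kW * worker share * PUE = 0.165
var CO2_G_PER_KWH = 452;                    // US grid average (EPA, 2018 report)
var TRABANT_CO2_G_PER_KM = 170;

function trabantKilometers(hoursOfComputation) {
  return (hoursOfComputation * KWH_PER_WORKER_HOUR * CO2_G_PER_KWH) / TRABANT_CO2_G_PER_KM;
}
```

One hour of computation thus maps to roughly 0.44 km of Trabant driving.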
All of this must be wrong
Except the durations! Everything else is highly subject to debate.
Sources are here, and contributions or feedback are welcome.
You should see a page asking you to confirm testing this browser experiment.
Once you click on the install button, the current Firefox interface will be replaced on the fly.
This interface is an old version of Browser.html. But instead of requiring a custom runtime, it is just a regular web site, written with web technologies and fetched from GitHub at every startup.
If you want to check that this is a regular web page, just look at the source:
view-source:http://rawgit.com/ochameau/planula-browser-advanced/addon-demo/
If needed you can revert at any time back to default Firefox UI using the “Ctrl + Alt + R” shortcut.
Install a custom protocol handler for browserui:// in order to redirect to the install page,
The install page then communicates with a privileged script to set the “browser.chromeURL” preference, which indicates the URL of the top-level document,
While we set this preference, we also grant the target URL additional permissions to use the “mozbrowser” attribute on iframes,
Finally, it reloads the top-level document with the target URL.
The <iframe mozbrowser> tag, while being non-standard, allows an iframe to act similarly to a <xul:browser> or a <webview> tag. It allows websites to be opened safely within the interface. Webpages loaded inside it also run in a separate content process (e10s), contrary to a regular <iframe> tag.
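For illustration, here is roughly how a browser UI page can embed a website with it (a sketch; the event name and its payload follow the old Browser API, and example.com is a placeholder):

```javascript
// Create a privileged browser frame from the HTML interface.
var frame = document.createElement("iframe");
frame.setAttribute("mozbrowser", "true");
frame.setAttribute("src", "http://example.com/");

// The embedding UI can observe navigation without reaching into the page:
frame.addEventListener("mozbrowserlocationchange", function (event) {
  console.log("navigated to", event.detail);
});

document.body.appendChild(frame);
```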
Why?
Last year, during Whistler All Hands, there was this “Kill XUL” meeting.
Various options were discussed, but it is unclear whether any of them has really been looked into,
except maybe the Electron option, via the Tofino project.
Then a thread was posted on firefox-dev. At least Go Faster and new Test Pilot addons
started using HTML for new self-contained features of Firefox, which is already a great step forward!
But there was no experiment to measure how we could leverage HTML to build browsers within Mozilla.
Vivien and I started looking into this and ended up releasing this addon.
But we also have a more concrete plan for slowly migrating Firefox away from XUL and cryptic XPCOM/jsm/chrome technologies
toward a mix of Web standards + Web extensions. We do have a way to make Web extensions work within these new HTML interfaces.
Actually, it already supports basic features: when you open browserui:// links, it actually opens an HTML page from a Web extension.
How to hack your own thing?
First, you need to host an HTML page somewhere.
Any website can be loaded: browserui://localhost/ if you are hosting files locally,
or even browserui://google.com/ if you just want to load Google.
Just remember the “Ctrl + Alt + R” shortcut to get back to the default Firefox UI!
The easiest way is probably to fork this one-file minimal browser, or directly the demo browser.
Host it somewhere and open the matching browserui:// URL.
browserui:// just maps one to one to the same URL starting with “http” instead of “browserui”.
Given that this addon is just a showcase, we don’t support https yet.
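That one-to-one mapping is simple enough to sketch:

```javascript
// browserui:// maps one-to-one to the same URL over http://
function browseruiToHttp(url) {
  return url.replace(/^browserui:\/\//, "http://");
}

browseruiToHttp("browserui://rawgit.com/mozilla/myui/master/");
// → "http://rawgit.com/mozilla/myui/master/"
```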
Then change the files, hit “Ctrl + R”, and your browser UI will be reloaded, fetching its resources again over HTTP.
Once you have something you want to share, GitHub is handy.
If you push files to, let’s say, the “mozilla” account with “myui” as the repository name,
then you can share it simply via the following link:
browserui://rawgit.com/mozilla/myui/master/
But there are many ways to control which particular version you want to share.
Sharing another branch, like the “demo” branch:
Demonstrate a WebExtension-based browser and the ability to implement Web Extension APIs from an HTML document.
Tweak the platform to handle OS integration better from low-privileged HTML documents.
Things like popup/panels, transparent windows, native OS controls, menus, …
Also tune the platform to be able to load existing browser features from HTML, like about: pages, view-source:, devtools, …
Actually, we already have various patches to do that and would like to upstream them to Firefox!
What about using Web Extension APIs to implement core Firefox features?
Here is the opportunity I would like to discuss today.
Not only new features (Hello, Pocket) but also existing built-in features (e.g. Session Restore). I recently blogged about building it as a web extension.
Session restore is a critical feature of Firefox.
It uses many Mozilla-only technologies: XUL, XPCOM, message managers, jsm and so on.
It also involves mostly privileged code even though that isn’t really needed, possibly leading to security issues.
Even if it lives in its own folder, /browser/components/sessionstore/, many parts of it are hardcoded elsewhere.
It is clearly not self-contained.
Instead of just hardcoding this feature into Firefox, we could possibly ship it as an addon.
That would have various benefits:
Give us a chance to release this part of Firefox faster than the platform,
Help us experiment by doing some A/B testing with two very different implementations,
Dogfooding Web Extension APIs would make them more stable and ensure they are both useful and powerful,
It should open ways to reuse these addons once Servo is ready and implements Web Extension APIs,
Last but not least, it dramatically reduces the contribution efforts required to modify a core Firefox feature:
Forget about building C++ and having a build environment,
You can possibly checkout a small repo instead of all mozilla-central,
Do not necessarily have to use various mozilla specific tools like mach,
No need to even build Firefox itself, instead you could fetch a nightly build and install the addon on it,
And forget about all the cryptic technologies that we keep using like ancient relics: XUL, XPCOM and so on!
About contribution: I asked how many people contribute(d) to session restore.
There is mostly one active employee working on it: mconley.
Then sparse contributions are made by other employees like ttaubert, yoric, dragana, mystor, mayhemer, …
But there seems to be only one non-employee contribution, made by Allasso Travesser, with just one patch.
I’m convinced we can engage more contributors with simpler workflows (addon versus built-in) and technologies with a lower learning curve (Web Extensions vs XUL).
Session Restore is a built-in Firefox feature which preserves user data after a crash or an unexpected close.
I spent a little time exploring whether it is possible to build such a feature as a replaceable web extension.
Here is a sketch of session store implemented as a web extension:
This addon currently saves and restores:
tabs (the url for each tab and the tab order)
form values
scroll positions
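As an illustration of what “saving tabs” involves, here is a sketch using the standard Web Extension namespaces (browser.tabs, browser.storage); this is not the addon’s actual code:

```javascript
// Keep only what is needed to restore the session: each tab's URL, in tab order.
function serializeTabs(tabs) {
  return tabs
    .slice()
    .sort(function (a, b) { return a.index - b.index; })
    .map(function (tab) { return { url: tab.url }; });
}

// In the addon's background script (sketch):
// browser.tabs.query({}).then(function (tabs) {
//   return browser.storage.local.set({ session: serializeTabs(tabs) });
// });
```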
Missing features (compared to the built-in session restore):
Does not restore session storage
Always restores the previous session
I have no idea what it does regarding private browsing
No dedicated about:sessionrestore page
Does not save tab history; instead it just saves the current tab document/form/scroll
Getting the above points working is a matter of time and possibly some tweaks to the current Web Extensions APIs.
Yes. It is possible to implement a core Firefox feature with the in-progress implementation of Web Extension APIs.
It also shows the limitations of the current Chrome APIs. For example, in order to fully support tab history, the APIs may need to be extended.
Source code is available on github.
A pre-release version is also available. Don’t forget to toggle the xpinstall.signatures.required preference to false in about:config to be able to install it.
The Mozilla ecosystem already has plenty of built-in features, scripts and addons to debug memory usage. But most of them are focused on the internals of the C++ codebase. These tools are very verbose and expose very little Javascript metadata, so you have to start learning tons of internal C++ classes before being able to understand that your Javascript objects are actually visible in these tools’ output!
Until now, when chasing Addon SDK memory leaks, I was just looking at overall memory usage and reading and re-reading our codebase until I finally found the leak by seeing it in the code… But that practice may come to an end!
We should have a Javascript-oriented memory debugging tool. One with a clear picture of which objects are still allocated at a given point in time. Without any C++ aspect. With an output that any seasoned Javascript developer can easily read and understand without knowing much about how Mozilla’s engine works.
With that in mind, I started looking at the CC/GC object graph. This graph contains a view of all objects allocated dynamically by the garbage collector. All Javascript objects end up in this graph, but so do many more C++ objects that we have to translate into a meaningful Javascript paradigm for the developer.
Then I realized that an XPCOM component already exposes the whole CC graph: nsICycleCollectorListener. But again, with very little Javascript information other than “this is a Javascript object” or “this is a Javascript function”. Not much more. It ends up being quite frustrating, as most of the information is there; we just miss a few pinches of Javascript metadata.
Like:
what are the attributes of this object?
in which document does it live?
in which script was it allocated?
on which line?
in which function?
which other Javascript objects refer to this one?
what is the function’s name/source?
…
Finally, because of -or- thanks to the extra motivation given by padenot and vingtetun, I ended up doing crazy hacks to fetch this information directly from Javascript: calling the jsapi library via js-ctypes, with the object addresses given by the nsICycleCollectorListener interface. The benefit is that this experiment can run on any Firefox release build (i.e. no need for a custom Firefox build). Using only JS also allows experimenting faster by avoiding the compile phase. But this should definitely be kept as an experiment, as I would not consider it a safe practice!!
You can install this addon; it should work on Windows and Linux with FF20+. You can easily see bug 839280’s leaks on today’s Aurora (FF21) by opening Firefox with this addon, then opening and closing the devtools inspector panel (Ctrl+Shift+I) and finally running the memory script by pressing the Alt+Shift+D shortcut.
Wait a bit: the addon processes the whole CC graph and will freeze your Firefox instance. It then opens a folder with a log file that displays various information about potential cross-compartment leaks.
############################################################################
DOM Listener leak.
>>> Leaked listener ctypes.uint64_t.ptr(ctypes.UInt64("0x128a16c0")) - JS Object (Function)
Function source:
function () {
"use strict";
requisition.update(buttonSpec.typed);
//if (requisition.getStatus() == Status.VALID) {
requisition.exec();
/*
}
else {
console.error('incomplete commands not yet supported');
}
*/
}
>>> DOM Event target holding the listener ctypes.uint64_t.ptr(ctypes.UInt64("0x12a95f60"))
FragmentOrElement (XUL) toolbarbutton id='command-button-responsive' class='command-button' chrome://browser/content/devtools/framework/toolbox.xul
############################################################################
Scope variable leak.
>>> Function keeping 'button' scope variable alive ctypes.uint64_t.ptr(ctypes.UInt64("0xf9a1640")) - JS Object (Function)
Function source:
function () {
"use strict";
requisition.update(buttonSpec.typed);
//if (requisition.getStatus() == Status.VALID) {
requisition.exec();
/*
}
else {
console.error('incomplete commands not yet supported');
}
*/
}
It immediately tells you that you may leak something via this anonymous function. May leak, not does leak, as it is always hard to tell which references are expected to be removed or not; but at least it tells you that this reference still exists and may keep your compartment/document/global alive.
To make it short, the script first searches for FragmentOrElement objects in the CC graph, then for all objects from the same compartment. I focused my work on cross-compartment leaks, so I looked for edges going from and to these objects. Finally, I analysed each of the objects having references from and to other compartments and tried to translate C++ object patterns into a meaningful sentence for the Javascript paradigm.
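As a toy sketch of that cross-compartment pass (the data shapes here are hypothetical, not the addon’s actual representation of the CC graph):

```javascript
// The CC graph as a map: address → { compartment, edges: [addresses] }.
// An edge whose two ends live in different compartments is a candidate leak.
function crossCompartmentEdges(graph) {
  var result = [];
  Object.keys(graph).forEach(function (from) {
    graph[from].edges.forEach(function (to) {
      if (graph[to] && graph[to].compartment !== graph[from].compartment) {
        result.push({ from: from, to: to });
      }
    });
  });
  return result;
}
```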
Now What?
I’d like to get feedback from people used to debugging leaks (no matter the language) and also to discuss with people familiar with Gecko internals like nsXPCWrappedJS, JS Object (Call), … in order to know whether the assumptions I made here are correct, so that I can continue translating new potential C++ object patterns into meaningful Javascript use cases.
Mozilla teams recently wrote tons of new APIs
in a very short period of time, mostly for Firefox OS, but not only.
As Firefox Desktop, Firefox Mobile and Firefox OS are based on the same source
code, some of these APIs can easily be enabled on desktop and mobile.
Writing a new API can be seen as both complicated and simple. Depending on the
one you want to write, you don’t necessarily need to write anything other than
Javascript code (for example, the Settings API).
That makes such a task much more accessible and easier to prototype, as you do
not enter compile/run development cycles, nor have to build Firefox before even trying to experiment. But there is a significant amount of
Mozilla-specific knowledge to acquire before being able to write your API code.
The aim of this article is to write down a simple API example from the ground up
and try to explain everything you need to know before writing an API
with the same level of expertise as the Firefox OS engineers.
The example API: « CommonJS require »
Let’s say we would like to expose to websites a require() method that acts like
the nodejs/commonjs method of the same name. This function allows you to load
javascript files exposing a precise interface, without polluting your current javascript scope.
So given the following javascript file:
Any webpage will be able to use its hello function like this:
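A sketch of the corresponding call from a page (the module URL is a placeholder):

```javascript
// Once the addon is installed, any webpage can do:
var m = navigator.webapi.require("http://example.com/hello.js");
m.hello("world"); // returns "Hello world!" per the module sketched above
```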
Simplest implementation possible
In this first example I stripped out various advanced features in order to make it easier to
jump into Firefox internal code. I bundled this example as a Firefox addon
so that you can easily see it running and also hack on it.
You can download it here. Once installed, you will have to relaunch Firefox,
open any webpage, then open a Web console and finally execute the
navigator.webapi.require code I just gave.
Now let’s see what’s inside.
This .xpi file is just a zip file, so you can
open it and see three files:
install.rdf:
A really boring file describing our addon. The only two important
fields in this file are <em:bootstrap>false</em:bootstrap> and
<em:unpack>true</em:unpack>, required when you need to register an XPCOM file.
More info here.
chrome.manifest:
# Those two lines allow to register the Javascript xpcom component defined in
# `web-api.js`
component {20bf1550-64b8-11e2-bcfd-0800200c9a77} web-api.js
contract @mozilla.org/webapi-example;1 {20bf1550-64b8-11e2-bcfd-0800200c9a77}
# That line registers the xpcom component in the "JavaScript-navigator-property"
# category, which adds it to the list of components that inject a new property
# into the `navigator` global object of web pages. The second argument defines
# the name of the property we would like to set.
category JavaScript-navigator-property webapi @mozilla.org/webapi-example;1
web-api.js:
And last but not least, the Javascript XPCOM file. XPCOM is a component object
model overused in the Mozilla codebase.
More info here
Let’s analyse its content piece by piece:
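Here is a minimal sketch of such a component, following the standard XPCOM boilerplate of that era; the classID and contract ID come from the chrome.manifest above, but the exact original code may have differed:

```javascript
// web-api.js — a JS XPCOM component registered in the
// "JavaScript-navigator-property" category. Gecko calls init() with the
// page's window and exposes the returned object as `navigator.webapi`.
const { classes: Cc, interfaces: Ci, utils: Cu } = Components;
Cu.import("resource://gre/modules/XPCOMUtils.jsm");

function WebAPI() {}
WebAPI.prototype = {
  classID: Components.ID("{20bf1550-64b8-11e2-bcfd-0800200c9a77}"),
  contractID: "@mozilla.org/webapi-example;1",
  QueryInterface: XPCOMUtils.generateQI([Ci.nsIDOMGlobalPropertyInitializer]),

  init: function (window) {
    // Whatever we return here becomes the value of `navigator.webapi`.
    return {
      require: function (url) {
        // Fetch `url`, evaluate it against a fresh `exports` object
        // and return that object. Left as an exercise.
      }
    };
  }
};

const NSGetFactory = XPCOMUtils.generateNSGetFactory([WebAPI]);
```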
I’ll let you discover the implementation of the require method; it will
be your job to implement such a method. You now have the very minimal setup where
you can tweak the value returned by the init method and expose your own
API to webpages.
Note that this is a very minimal example. I’ll try to keep blogging
about this and eventually talk about:
interfaces definition,
custom event implementation,
other XPCOM categories (in order to inject on objects other than navigator),
how to implement a cross-process API (mandatory for Firefox OS).
During my on-boarding on the Firefox OS team, I kept a draft of all the necessary steps to build the project and flash it to the phone.
I’m pretty sure it can help people on-boarding onto the project to have a single page that allows anyone to start working on Firefox OS. I highly suggest you take a look at the MDN Firefox OS documentation if you visit this page later on, as this blogpost will most likely be outdated in a few weeks.
Environment
Use a Virtual Machine
I suggest everyone use a VM. It allows you to use exactly the same environment, in order to maximize your chances of successfully building Firefox OS!
Using another OS, another Linux distro or even another Ubuntu version will introduce differences in dependency versions and can easily give you errors that no one but you is facing :(
You can use VMware Player, which is free and available here, or any other VM software you are comfortable with that has decent USB support (required to flash the phone).
Use Ubuntu 11.10
For the same reason as the VM, I suggest you use the recommended Linux distro and version.
You can download this Ubuntu 11.10 x64 ISO image and create a VM out of it (it is super easy with VMware; it almost does everything for you). The only important things are to set a large enough virtual drive (30GB is a safe minimum) and enough memory (4GB is a safe minimum).
Now open a terminal and launch all following commands in order to install all necessary dependencies.
# The following PPA allows you to easily install the JDK through apt-get
sudo add-apt-repository ppa:ferramroberto/java
sudo apt-get update
sudo apt-get install sun-java6-jdk
Android SDK, in order to install adb
# You first need to install 32-bit libs as we are using a 64-bit OS
# otherwise, you will have following error while running adb:
# $ adb: No such file or directory
sudo apt-get install ia32-libs
# There is no particular reason to use this SDK version
# It was the current version when I've installed it
wget http://dl.google.com/android/android-sdk_r20.0.3-linux.tgz
tar zxvf android-sdk_r20.0.3-linux.tgz
cd android-sdk-linux/
# The following command installs only "platform-tools" package which
# contains adb and fastboot
./tools/android update sdk --no-ui --filter 1,platform-tool
# Register adb in your PATH
echo "PATH=`pwd`/platform-tools:\$PATH" >> ~/.bashrc
# Execute in a new bash instance in order to gain from this new PATH
bash
Tweak udev in order to recognize your phone
If you do not do that at all, or not properly, $ adb devices will print this:
???????????? no permissions
You need to put the following content into /etc/udev/rules.d/51-android.rules
cat <<EOF | sudo tee -a /etc/udev/rules.d/51-android.rules
SUBSYSTEM=="usb", ATTRS{idVendor}=="19d2", MODE="0666"
SUBSYSTEM=="usb", ATTRS{idVendor}=="18d1", MODE="0666"
EOF
sudo restart udev
Here I register only the IDs of the internal Mozilla phones, otoro and unagi.
You may want to add lines for other phones. See this webpage for other vendor IDs.
Checkout all necessary projects
Checkout B2G repository
git clone https://github.com/mozilla-b2g/B2G.git
Take a minute to configure git, otherwise next steps will keep bugging you asking for your name and email.
cat > ~/.gitconfig <<EOF
[user]
name = My name
email = me@mail.com
[color]
ui = auto
EOF
Connect your phone and ensure it is visible from your VM.
In order to do so run adb devices, you should see non-empty list of devices.
$ adb devices
List of devices attached
full_unagi device
If you see no permissions message, checkout udev step.
Note that you have to set up your virtual machine software to connect the USB port to the VM. In VMware Player, click on: Player menu > Removable devices > "...something..." Android > Connect (Disconnect from host).
Checkout all dependencies necessary for your particular phone
Before running the following command, ensure that your phone is connected.
Note that you have to run this command with your phone still on Android OS, ICS version. If your phone is already on B2G, you will have to retrieve the backup-otoro or backup-unagi folder automatically created when running the following command.
If your device is on an Android version older than ICS, you will have to flash it to ICS first. For both of these issues, ask in #b2g for help.
This step will take a while, as it downloads tons of big projects: android, gonk, kernel, mozilla-central, gaia, … More than 4GB of git repositories, so be patient.
cd B2G/
# Run ./config --help for the list of supported phones.
./config.sh unagi
Install the Qualcomm Adreno graphics driver
Only if you are aiming to build Firefox OS for the otoro or unagi phones
will you have to manually download the Qualcomm Adreno armv7 graphics driver, available here.
Unfortunately, you will have to register on this website in order to be able to download this file. Once downloaded, put Adreno200-AU_LINUX_ANDROID_ICS_CHOCO_CS.04.00.03.06.001.zip into your B2G directory.
Build Firefox OS
If ./config.sh went fine, you can now build Firefox OS with ./build.sh!
If the build dies without an explicit error, you are most likely running out of memory; 4GB is a safe minimum.
If it fails with:
KeyedVector.h:193:31: error: indexOfKey was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
then your gcc version is too recent. Try using a gcc 4.6.x version.
Flash the phone
If ./build.sh went fine, you can now flash your phone:
./flash.sh
Note that I had to unplug and replug the device in order to make it work in the VM.
When running ./flash.sh, the unagi phone switches to a blue screen, then the ./flash.sh script gets stuck on a < waiting device > message. If I unplug it and plug it back in, it immediately starts flashing. Be careful if you have to do the same: ensure that ./flash.sh hasn’t started flashing when you unplug it!
If ./flash.sh fails saying that the image is too large, it might mean that you have to root your phone first. Again, ask in #b2g for help.
page-mod is the most commonly used API in Jetpack. It allows executing a piece of Javascript code against any given website. It is very similar to Greasemonkey and userscripts.
Addon SDK version 1.11, due October 30th, will bring various subtle but very important fixes, features and improvements to this API. In the meantime, we will start releasing beta versions on Tuesday (09/25) with 1.11b1.
Here is an overview of these changes:
You will now be able to execute page-mod scripts on already-opened tabs by using the new attachTo option.
[bug 708190]
With the same attachTo option, you can execute page-mod scripts only on top-level tab documents, and so avoid having them applied to iframes.
The following blogpost goes into detail about this new option.
[bug 684047]
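As a sketch combining both uses of the option (as far as I know attachTo accepts the values "existing", "top" and "frame"; check the SDK documentation for details):

```javascript
// Attach to already-opened tabs, and only to top-level documents.
var pageMod = require("page-mod");
pageMod.PageMod({
  include: "*.mozilla.org",
  attachTo: ["existing", "top"],
  contentScript: 'document.body.style.border = "5px solid red";'
});
```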
page-mod now ignores non-tab documents like panels, widgets, sidebars, hidden documents living in Firefox’s hidden window, …
[bug 777632]
Your addon will be more efficient, as we removed a costly workaround: the Javascript proxies layer between your content script and the page. We now rely directly on C++ wrappers, also known as X-ray wrappers. We expect a major improvement in terms of memory and CPU usage. As this change depends on modifications made in Firefox, it will only be enabled on Firefox 17 and greater.
[bug 786976]
Content scripts are now correctly frozen when you go back and forth in tab history. Before this fix, your content script was still alive and could throw an unexpected exception or modify an unexpected document.
[bug 766088]
Random fixes: window.top and window.parent will be correct for iframes [bug 784431].
Last but not least, and still at risk for the 1.11 release: you will be able to extend the privileges of your content script to extra domains, so that your script can perform actions on your own domain in addition to the current page’s domain, without facing cross-domain limitations. This relies on improvements being made to Firefox and will only be enabled on Firefox 17+.
[bug 786681]
It is really exciting to see our most-used API receive so many improvements, and I hope we fixed most of the long-standing issues you may have faced with page-mod!
We would really like to get your feedback on these changes. If you find anything wrong, please file bugs here, and do not hesitate to come discuss with our team on the mailing list.
In a previous post, I described my first proposal for localization support in Jetpack addons. I’ve decided to change the locale file format to YAML instead of JSON. During the MozCamp event, folks helped me identify some pitfalls with JSON:
No multiline string support. Firefox’s parser allows multiline strings, but that is not officially supported! Relying on it would prevent third-party tools from working properly.
No easy way to add comments. It is mandatory for localizers to have context descriptions in comments next to the keys to translate. As there is no way to add comments in JSON, this would end up complexifying the locale format a lot.
Example
French locale file in YAML format
# You can add comments with `#` ...
Hello %s: Bonjour %s
# almost ...
hello_key: Bonjour %s
# wherever you want!

# For multiline, you need to indent your string with spaces
multiline: "Bonjour
  %s"

# Plural forms.
# we use a nested object with attributes that depend on the target language
# in english, we only have 'one' (for 1) and 'other' (for everything but 1)
# in french, it is the same except that 'one' matches 0 and 1
# in some languages like Polish, there are 4 forms, and 6 in arabic
#
# So having a structured format like YAML
# helps us write these translations!
pluralString:
  one: "%s telechargement"
  other: "%s telechargements"

# I need to enclose these strings with `"` because of %. See note after.
Addon code
// Get a reference to `_` gettext method with:
const _ = require("l10n").get;

// These three forms end up returning the same string.
// We can still use a locale string in code, or use a key.
// And multiline string gets its `\n` removed. (there is a way to keep them)
_("Hello %s", "alex") == _("hello_key", "alex") == _("multiline", "alex")

// Example of non-naive l10n feature, plurals:
_("pluralString", 0) == "0 telechargement"
_("pluralString", 1) == "1 telechargement"
_("pluralString", 10) == "10 telechargements"
Advantages of YAML
Multiline strings are supported nicely and are easy to read. You do not need to add a final \ on every line. As multiline is easier, localizers can use it more often, which will surely improve the readability of locale files!
Structured data format: we can use this power whenever it is needed, for example when we need to implement complex l10n features like plural forms, or any feature that goes beyond simple 1-to-1 localization. The cool thing compared to JSON is that even if we define structures, we keep a really simple format with no noise (like {, }, “, …).
As nothing comes without issues, here is what I’ve found concerning YAML:
This format is not a Web standard. I don’t think it makes much sense to avoid using it because of that. We are clearly missing a standardized format for localization in the web world.
You may hit some issues when you do not enclose your strings in " or '. For example, you can’t start a string with %, nor have a : in the middle of your string, without enclosing it.
Even if YAML is not a web standard, it has been formally specified. And unfortunately, a handy feature becomes a pitfall for our purpose: some strings are automatically converted. Yes, True, False, … are automatically converted to a boolean value. We can work around this in multiple ways, either by documenting it or by modifying the parser. The same solution applies here: you need to enclose your strings in quotes.
I’m going to describe the first proposal for localization support in Jetpack.
This approach uses the gettext pattern and JSON files for locales.
It is the first step of multiple iterations; this one only allows retrieving localized strings in javascript code.
We are going to provide ways to translate files, mainly HTML files, in another iteration.
And we are about to offer an online tool to ease addon localization (like the babelzilla website).
Let’s start by looking at a concrete example, then I’ll justify our different choices.
// Retrieve a dynamic reference to `_` gettext method with:
const _ = require("l10n").get;

// Then print to the console a localized string:
console.log(_("Hello %s", "alex"));
// => Prints "Bonjour alex" in french.

// Or, if we don't want to use localized string in addon code:
console.log(_("hello_user", "alex"));
Why gettext?
It gives a way to automatically fetch localizable strings or IDs from source code
by searching for the _( ) pattern.
It allows using either strings or IDs as the value to translate.
It is obviously better to use IDs, because locales will break
each time the addon developer fixes a typo in the main language hardcoded in the code.
But we should not forget that the high-level APIs try to
simplify addon development. So it has to be really easy to translate a simple
addon that has only 2 JS files and fewer than 50 lines of code!
And the simple fact of requiring a locale file for the default language
appears like a big burden for such a small addon.
Having said that, I’m really happy that the gettext approach doesn’t discourage, nor
make it harder, to use IDs. So if an addon developer builds a big addon,
or just wants to take more time to follow better practices, he can still do it, easily!
Why JSON for locales?
We could have used properties files, like XUL addons do. But this format has some
limitations that are not compatible with the gettext pattern: keys can’t contain spaces
and are limited to ASCII or something alike, so we can’t put text in a key.
So instead of using yet another specific format, I’m suggesting here to use JSON.
JSON is really easy to parse and generate from both client and server side,
and I’m convinced that it is simple enough to be edited with a text editor.
On top of that we can build a small web application to ease localization.
In my very first proposal, I used a complex JSON object with nested attributes.
But it ended up complexifying the whole story without any real advantage.
So I’m now suggesting the simplest JSON file we can require:
one big object whose keys are the strings or IDs to translate and whose values are the translated strings.
Then we will be able to use JSON features to implement complex localization features,
like plural handling; values may then be an array of plural forms.
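To make this concrete, here is a sketch of what the lookup could look like over such a JSON object (the plural-form shape and rules here are hypothetical and simplified to English-style plurals):

```javascript
// A flat locale object: values are either a translated string or a
// structure of plural forms (hypothetical shape).
var locale = {
  "Hello %s": "Bonjour %s",
  "pluralString": { "one": "%s telechargement", "other": "%s telechargements" }
};

function _(key) {
  var args = Array.prototype.slice.call(arguments, 1);
  var entry = Object.prototype.hasOwnProperty.call(locale, key) ? locale[key] : key;
  if (typeof entry === "object") {
    // Pick a plural form based on the first numeric argument
    // (English rules: `one` for 1, `other` for everything else).
    entry = args[0] === 1 ? entry.one : entry.other;
  }
  // Substitute each `%s` with the next argument.
  return entry.replace(/%s/g, function () { return String(args.shift()); });
}

_("Hello %s", "alex");  // "Bonjour alex"
_("pluralString", 10);  // "10 telechargements"
```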
The big picture
Everything starts with an addon developer or one of their contributors.
If one of them wants to make the addon localizable, they have to use this new localization module.
const _ = require("l10n").get;
Multiple choices have already been made here:
_ is not a magic global; we need to explicitly require it.
This choice will simplify compatibility with other CommonJS environments, like NodeJS.
The name of the module itself is l10n instead of localization, to make it easier to use.
This module exposes the _ function on a get attribute in order to be able to
expose other methods. I’m quite confident we will need some functions for plurals or file localization.
Then, they need to use _ on localizable strings:
var cm = require("context-menu");
cm.Item({
  label: _("My Menu Item"),
  context: cm.URLContext("*.mozilla.org")
});
Now, they have two choices:
use a string written in their preferred language, like here,
so that they don’t have to create a locale file.
use an ID: instead of _("My Menu Item"), we would use _("contextMenuLabel").
But this forces them to create a localization file in order to map contextMenuLabel to My Menu Item.
Then, either a developer or a localizer can generate or modify the locale files.
Each jetpack package can have its own locale folder.
This folder contains one JSON file per supported language.
Here is what a jetpack addon looks like:
* my-addon/
* package.json # manifest file with addon name, description, version, ...
* data/ # folder for all static files
* images,
* html files,
* ...
* lib/ # folder that contains all JS modules:
* main.js # main module to execute on startup
* my-module.js # custom module that may use localization module
* ...
* locale/ # our main interest!
* en-US.json
* fr-FR.json
* en-GB.json
* ...
The next iteration will add a new feature to our command-line tool,
which will generate or update a locale file for a given language by fetching localization strings from the source code.
For example, the following command will generate the my-addon/locale/fr-FR.json file:
$ cfx fetch-locales fr-FR
my-addon/locale/fr-FR.json
{ "My Menu Item": "My Menu Item" }
Finally, we need to replace right side values with the localized strings:
{ "My Menu Item": "Mon menu" }
And build the final addon XPI file with:
$ cfx xpi
Any kind of feedback would be highly appreciated on this group thread.
If you want to follow this work,
subscribe to bug 691782.