How does it fit together?
Energy Sparks is a relatively straightforward Ruby on Rails web application with a Postgres database. It uses customised Bootstrap for styling.
The application was initially developed using sprockets for asset management, but following the upgrade to Rails 6 and the addition of ActionText, the webpacker asset pipeline was also added. The two pipelines work together, but a future goal is to migrate everything from sprockets to webpacker.
A CDN, provided by AWS CloudFront, is used on test and production to serve all static assets.
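In a standard Rails setup, pointing asset helpers at a CDN is a one-line configuration change. A minimal sketch (the environment variable name is illustrative, not necessarily what this application uses):

```ruby
# config/environments/production.rb (sketch)
# Point Rails asset helpers at the CloudFront distribution so that
# stylesheet, javascript and image URLs are served from the CDN.
config.action_controller.asset_host = ENV['CLOUDFRONT_HOST']
```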
The production application uses a file store cache. Previously it used an in-memory cache, but the analytics code has high memory requirements when processing an aggregated school, and memory is more limited than file space on the server.
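Rails supports this out of the box; a sketch of the relevant setting (the cache directory shown is the Rails default, not confirmed from this application's config):

```ruby
# config/environments/production.rb (sketch)
# Cache entries are written to disk rather than held in process memory,
# trading slower reads for much larger capacity.
config.cache_store = :file_store, Rails.root.join('tmp', 'cache')
```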
The Devise and CanCanCan libraries are used for user authentication and authorisation respectively.
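With CanCanCan, authorisation rules live in an `Ability` class keyed off the current user. A sketch of the pattern (the role names and rules below are illustrative, not this application's real permission model):

```ruby
# app/models/ability.rb (sketch; roles and rules are illustrative)
class Ability
  include CanCan::Ability

  def initialize(user)
    return if user.nil?                        # guests get no abilities

    if user.admin?
      can :manage, :all                        # admins can do everything
    else
      can :read, School                        # ordinary users can view schools
      can :manage, School, id: user.school_id  # and manage their own school
    end
  end
end
```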
A cron job runs every day to generate a sitemap, which is then submitted to Google. This uses the sitemap_generator gem.
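The sitemap_generator gem is driven by a small DSL in `config/sitemap.rb`; a sketch (the host and paths below are illustrative):

```ruby
# config/sitemap.rb (sketch using the sitemap_generator DSL)
SitemapGenerator::Sitemap.default_host = 'https://energysparks.uk'

SitemapGenerator::Sitemap.create do
  # Each `add` call appends a URL entry to the generated sitemap.
  add '/schools', changefreq: 'daily'
end
```

The daily cron job would typically invoke the gem's `rake sitemap:refresh` task, which regenerates the file and pings the search engines.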
The analytics code is hosted in a separate GitHub repository and provides the advanced analysis of the usage data, weather data, etc. It is a required dependency of the web application.
Most of the charting is also handled by the analytics code, along with the equivalences.
The repository is referenced in the Gemfile as a GitHub repo, pinned to tagged versions to enable versioned releases.
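Bundler supports sourcing a gem directly from GitHub at a given tag; a sketch of what such a Gemfile entry looks like (the gem name, repository path and tag here are illustrative):

```ruby
# Gemfile (sketch; gem name, repo path and tag are illustrative)
gem 'energy-sparks_analytics',
    github: 'Energy-Sparks/energy-sparks_analytics',
    tag: 'v1.0.0'
```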
Currently the code is namespaced as 'dashboard', which is a historical anomaly: the analytics code originally provided content for an analytics dashboard. A future refactoring should move it into a more appropriate namespace.
All incoming data is processed via an S3 bucket.
Energy Sparks currently supports three methods of data retrieval to get the data into the S3 bucket:
This then goes through:
Although the application now has the validated data, the analytics code requires a more nuanced view of it. For example, if a single meter is used both for storage heaters and for general consumption, the analytics needs these split into two separate pseudo-meters to support further analysis and charts.
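The splitting idea can be sketched in a few lines. This is a simplified illustration only: it assumes overnight usage belongs to storage heaters (which typically charge on an off-peak tariff); the real analytics logic is considerably more sophisticated.

```ruby
# Illustrative only: split a single meter's hourly kWh readings into two
# pseudo-meters, attributing overnight usage to storage heaters.
OVERNIGHT_HOURS = (0..6) # assumed off-peak charging window

def split_storage_heater(readings)
  storage = {}
  general = {}
  readings.each do |hour, kwh|
    if OVERNIGHT_HOURS.cover?(hour)
      storage[hour] = kwh   # overnight load attributed to storage heaters
    else
      general[hour] = kwh   # everything else is general consumption
    end
  end
  [storage, general]
end

readings = { 2 => 5.0, 3 => 4.5, 12 => 1.2, 18 => 2.0 }
storage, general = split_storage_heater(readings)
```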
This aggregation process is run every day for all schools as part of the content generation process. It can take significant time and memory, so it cannot be run 'on the fly'. The validated data is passed to the analytics code, which performs the aggregation and returns all of the aggregated data to the main application. This data is then cached, ready for immediate re-use.
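The cache-or-compute pattern described above can be sketched as follows. The class and method names are illustrative (the application itself would use `Rails.cache`); the point is that the expensive aggregation runs once per school and subsequent reads hit the cache.

```ruby
# Minimal sketch of the cache-or-compute pattern (illustrative names).
class AggregatedSchoolCache
  def initialize
    @store = {}
  end

  # Return the cached aggregate if present; otherwise run the expensive
  # aggregation block once and remember its result.
  def fetch(school_id)
    @store.fetch(school_id) { @store[school_id] = yield }
  end
end

cache = AggregatedSchoolCache.new
runs = 0
result = nil
# Second call is served from the cache; the block runs only once.
2.times { result = cache.fetch(42) { runs += 1; :aggregated_data } }
```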