    Hackathon Starter


    Live Demo: https://hackathon-starter-2018.herokuapp.com

    Jump to What’s new?

    A boilerplate for Node.js web applications.

    If you have attended any hackathons in the past, then you know how much time it takes to get a project started: decide on what to build, pick a programming language, pick a web framework, pick a CSS framework. A while later, you might have an initial project up on GitHub and only then can other team members start contributing. Or how about doing something as simple as Sign in with Facebook authentication? You can spend hours on it if you are not familiar with how OAuth 2.0 works.

    When I started this project, my primary focus was on simplicity and ease of use. I also tried to make it as generic and reusable as possible to cover most use cases of hackathon web apps, without being too specific. In the worst case you can use this as a learning guide for your projects, if for example you are only interested in Sign in with Google authentication and nothing else.

    Testimonials

    “Nice! That README alone is already gold!”
    — Adrian Le Bas

    “Awesome. Simply awesome.”
    — Steven Rueter

    “I’m using it for a year now and many projects, it’s an awesome boilerplate and the project is well maintained!”
    — Kevin Granger

    “Small world with Sahat’s project. We were using his hackathon starter for our hackathon this past weekend and got some prizes. Really handy repo!”
    — Interview candidate for one of the companies I used to work with.

    Modern Theme

    Flatly Bootstrap Theme

    API Examples

    Table of Contents

    Features

    • Local Authentication using Email and Password
    • OAuth 1.0a Authentication via Twitter
    • OAuth 2.0 Authentication via Facebook, Google, GitHub, LinkedIn, Instagram
    • Flash notifications
    • MVC Project Structure
    • Node.js clusters support
    • Sass stylesheets (auto-compiled via middleware)
    • Bootstrap 4 + Extra Themes
    • Contact Form (powered by Mailgun, Sendgrid or Mandrill)
    • Account Management
    • Gravatar
    • Profile Details
    • Change Password
    • Forgot Password
    • Reset Password
    • Link multiple OAuth strategies to one account
    • Delete Account
    • CSRF protection
    • API Examples: Facebook, Foursquare, Last.fm, Tumblr, Twitter, Stripe, LinkedIn and more.

    Prerequisites

Note: If you are new to Node or Express, I recommend watching the Node.js and Express 101 screencast by Alex Ford, which teaches Node and Express from scratch. Alternatively, here is another great tutorial for complete beginners – Getting Started With Node.js, Express, MongoDB.

    Getting Started

    The easiest way to get started is to clone the repository:

    # Get the latest snapshot
    git clone https://github.com/sahat/hackathon-starter.git myproject
    
    # Change directory
    cd myproject
    
    # Install NPM dependencies
    npm install
    
    # Then simply start your app
    node app.js

Warning: If you want to use an API that requires HTTPS to work (for example, Pinterest or Facebook), you will need to download ngrok. Start ngrok after starting the project.

    # start ngrok to intercept the data exchanged on port 8080
    ./ngrok http 8080

Next, use the HTTPS URL provided by ngrok, for example https://hackaton.ngrok.io

Note: I highly recommend installing Nodemon. It watches for any changes in your Node.js app and automatically restarts the server. Once installed, instead of node app.js use nodemon app.js. It will save you a lot of time in the long run, because you won’t need to manually restart the server each time you make a small change to the code. To install, run sudo npm install -g nodemon.

    Obtaining API Keys

    To use any of the included APIs or OAuth authentication methods, you will need to obtain appropriate credentials: Client ID, Client Secret, API Key, or Username & Password. You will need to go through each provider to generate new credentials.

    Hackathon Starter 2.0 Update: I have included dummy keys and passwords for all API examples to get you up and running even faster. But don’t forget to update them with your credentials when you are ready to deploy an app.

    • Visit Google Cloud Console
    • Click on the Create Project button
    • Enter Project Name, then click on Create button
    • Then click on APIs & auth in the sidebar and select API tab
    • Click on Google+ API under Social APIs, then click Enable API
    • Next, under APIs & auth in the sidebar click on Credentials tab
    • Click on Create new Client ID button
    • Select Web Application and click on Configure Consent Screen
    • Fill out the required fields then click on Save
    • In the Create Client ID modal dialog:
    • Application Type: Web Application
    • Authorized Javascript origins: http://localhost:8080
    • Authorized redirect URI: http://localhost:8080/auth/google/callback
    • Click on Create Client ID button
    • Copy and paste Client ID and Client secret keys into .env

Note: When you are ready to deploy to production, don’t forget to add your new URL to Authorized Javascript origins and Authorized redirect URI, e.g. http://my-awesome-app.herokuapp.com and http://my-awesome-app.herokuapp.com/auth/google/callback respectively. The same goes for other providers.


    • Visit Snap Kit Developer Portal
    • Click on the + button to create an app
    • Enter a name for your app
    • Enable the scopes that you will want to use in your app
    • Click on the Continue button
    • Find the Kits section and make sure that Login Kit is enabled
    • Find the Redirect URLs section, click the + Add button, and enter http://localhost:8080/auth/snapchat/callback
    • Find the Development Environment section. Click the Generate button next to the Confidential OAuth2 Client heading within it.
    • Copy and paste the generated Private Key and OAuth2 Client ID keys into .env
    • Note: OAuth2 Client ID is SNAPCHAT_ID, Private Key is SNAPCHAT_SECRET in .env
    • To prepare the app for submission, fill out the rest of the required fields: Category, Description, Privacy Policy Url, and App Icon

    Note: For production use, don’t forget to:

    • generate a Confidential OAuth2 Client in the Production Environment and use the production Private Key and OAuth2 Client ID
    • add the production url to Redirect URLs section, e.g. http://my-awesome-app.herokuapp.com/auth/snapchat/callback
    • submit the app for review and wait for approval

    • Visit Facebook Developers
• Click My Apps, then select Add a New App from the dropdown menu
    • Enter a new name for your app
    • Click on the Create App ID button
    • Find the Facebook Login Product and click on Facebook Login
    • Instead of going through their Quickstart, click on Settings for your app in the top left corner
    • Copy and paste App ID and App Secret keys into .env
    • Note: App ID is FACEBOOK_ID, App Secret is FACEBOOK_SECRET in .env
    • Enter localhost under App Domains
    • Choose a Category that best describes your app
    • Click on + Add Platform and select Website
    • Enter http://localhost:8080 under Site URL
    • Click on the Settings tab in the left nav under Facebook Login
    • Enter http://localhost:8080/auth/facebook/callback under Valid OAuth redirect URIs

Note: After a successful sign in with Facebook, a user will be redirected back to the home page with the hash #_=_ appended to the URL. It is not a bug. See this Stack Overflow discussion for ways to handle it.
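One common remedy from that discussion is stripping the hash client-side. Sketched here as a pure helper so the URL handling is easy to test (the function name is mine, not part of the project):

```javascript
// Remove the trailing '#_=_' that Facebook appends after OAuth redirects
function stripFacebookHash(href) {
  return href.endsWith('#_=_') ? href.slice(0, -4) : href;
}

// In the browser you would run something like:
// if (window.location.hash === '#_=_') {
//   history.replaceState(null, '', stripFacebookHash(window.location.href));
// }

console.log(stripFacebookHash('http://localhost:8080/#_=_'));
// → http://localhost:8080/
```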


    • Go to Account Settings
    • Select Developer settings from the sidebar
    • Then inside click on Register new application
    • Enter Application Name and Homepage URL
    • For Authorization Callback URL: http://localhost:8080/auth/github/callback
    • Click Register application
    • Now copy and paste Client ID and Client Secret keys into .env file

    • Sign in at https://apps.twitter.com
    • Click Create a new application
    • Enter your application name, website and description
    • For Callback URL: http://127.0.0.1:8080/auth/twitter/callback
    • Go to Settings tab
    • Under Application Type select Read and Write access
    • Check the box Allow this application to be used to Sign in with Twitter
• Click Update this Twitter application’s settings
    • Copy and paste Consumer Key and Consumer Secret keys into .env file

    • Sign in at LinkedIn Developer Network
    • From the account name dropdown menu select API Keys
    • It may ask you to sign in once again
    • Click + Add New Application button
    • Fill out all the required fields
    • OAuth 2.0 Redirect URLs: http://localhost:8080/auth/linkedin/callback
    • JavaScript API Domains: http://localhost:8080
    • For Default Application Permissions make sure at least the following is checked:
    • r_basicprofile
    • Finish by clicking Add Application button
    • Copy and paste API Key and Secret Key keys into .env file
    • API Key is your clientID
    • Secret Key is your clientSecret

    • Sign up or log into your dashboard
    • Click on your profile and click on Account Settings
    • Then click on API Keys
• Copy the Secret Key and add it to the .env file

    • Visit PayPal Developer
    • Log in to your PayPal account
    • Click Applications > Create App in the navigation bar
    • Enter Application Name, then click Create app
    • Copy and paste Client ID and Secret keys into .env file
    • App ID is client_id, App Secret is client_secret
    • Change host to api.paypal.com if you want to test against production and use the live credentials
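The host switch mentioned above usually happens where the SDK is configured. A minimal sketch with paypal-rest-sdk (the environment variable names are assumptions; check .env.example for the real ones):

```javascript
const paypal = require('paypal-rest-sdk');

paypal.configure({
  mode: 'sandbox', // 'live' points the SDK at api.paypal.com with production credentials
  client_id: process.env.PAYPAL_ID,        // assumed variable name
  client_secret: process.env.PAYPAL_SECRET // assumed variable name
});
```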


    • Go to http://www.tumblr.com/oauth/apps
    • Once signed in, click +Register application
    • Fill in all the details
    • For Default Callback URL: http://localhost:8080/auth/tumblr/callback
    • Click ✔Register
    • Copy and paste OAuth consumer key and OAuth consumer secret keys into .env file


    • Go to https://sendgrid.com/user/signup
    • Sign up and confirm your account via the activation email
    • Then enter your SendGrid Username and Password into .env file

    • Go to http://www.mailgun.com
    • Sign up and add your Domain Name
    • From the domain overview, copy and paste the default SMTP Login and Password into .env file

    • Go to https://www.twilio.com/try-twilio
    • Sign up for an account.
    • Once logged into the dashboard, expand the link ‘show api credentials’
    • Copy your Account Sid and Auth Token

    Project Structure

    Name Description
    config/passport.js Passport Local and OAuth strategies, plus login middleware.
    controllers/api.js Controller for /api route and all api examples.
    controllers/contact.js Controller for contact form.
    controllers/home.js Controller for home page (index).
    controllers/user.js Controller for user account management.
    models/User.js Mongoose schema and model for User.
    public/ Static assets (fonts, css, js, img).
    public/js/application.js Specify client-side JavaScript dependencies.
    public/js/main.js Place your client-side JavaScript here.
    public/css/main.scss Main stylesheet for your app.
    public/css/themes/default.scss Some Bootstrap overrides to make it look prettier.
    views/account/ Templates for login, password reset, signup, profile.
    views/api/ Templates for API Examples.
    views/partials/flash.pug Error, info and success flash notifications.
    views/partials/header.pug Navbar partial template.
    views/partials/footer.pug Footer partial template.
    views/layout.pug Base template.
    views/home.pug Home page template.
.dockerignore Folders and files ignored by Docker.
    .env.example Your API keys, tokens, passwords and database URI.
    .eslintrc Rules for eslint linter.
    .gitignore Folder and files ignored by git.
.travis.yml Travis CI configuration for continuous integration.
    app.js The main application file.
    docker-compose.yml Docker compose configuration file.
    Dockerfile Docker configuration file.
    package.json NPM dependencies.
    package-lock.json Contains exact versions of NPM dependencies in package.json.

Note: There is no preference for how you name or structure your views. You could place all your templates in a top-level views directory without having a nested folder structure, if that makes things easier for you. Just don’t forget to update extends ../layout and the corresponding res.render() paths in controllers.

    List of Packages

    Package Description
    @octokit/rest GitHub API library.
    bcrypt-nodejs Library for hashing and salting user passwords.
    body-parser Node.js body parsing middleware.
    chai BDD/TDD assertion library.
    chalk Terminal string styling done right.
    cheerio Scrape web pages using jQuery-style syntax.
    clockwork Clockwork SMS API library.
    compression Node.js compression middleware.
    connect-mongo MongoDB session store for Express.
    dotenv Loads environment variables from .env file.
    errorhandler Development-only error handler middleware.
eslint JavaScript linter.
eslint-config-airbnb-base Airbnb’s base ESLint configuration.
    eslint-plugin-chai-friendly Makes eslint friendly towards Chai.js ‘expect’ and ‘should’ statements.
    eslint-plugin-import ESLint plugin with rules that help validate proper imports.
    express Node.js web framework.
    express-flash Provides flash messages for Express.
    express-session Simple session middleware for Express.
    express-status-monitor Reports real-time server metrics for Express.
    express-validator Easy form validation for Express.
    fbgraph Facebook Graph API library.
    instagram-node Instagram API library.
    lastfm Last.fm API library.
    lob Lob API library.
    lusca CSRF middleware.
    mocha Test framework.
    mongoose MongoDB ODM.
    morgan HTTP request logger middleware for node.js.
    multer Node.js middleware for handling multipart/form-data.
    node-foursquare Foursquare API library.
    node-linkedin LinkedIn API library.
    node-sass Node.js bindings to libsass.
    node-sass-middleware Sass middleware compiler.
nyc Code coverage tool.
    nodemailer Node.js library for sending emails.
    passport Simple and elegant authentication library for node.js.
    passport-facebook Sign-in with Facebook plugin.
    passport-github Sign-in with GitHub plugin.
    passport-google-oauth Sign-in with Google plugin.
    passport-instagram Sign-in with Instagram plugin.
    passport-linkedin-oauth2 Sign-in with LinkedIn plugin.
    passport-local Sign-in with Username and Password plugin.
    passport-openid Sign-in with OpenId plugin.
    passport-oauth Allows you to set up your own OAuth 1.0a and OAuth 2.0 strategies.
    passport-snapchat Sign-in with Snapchat plugin.
    passport-twitter Sign-in with Twitter plugin.
    paypal-rest-sdk PayPal APIs library.
    pug (jade) Template engine for Express.
    request Simplified HTTP request library.
    sinon Test spies, stubs and mocks for JavaScript.
    sinon-mongoose Extend Sinon stubs for Mongoose methods to test chained methods easily.
stripe Official Stripe API library.
    supertest HTTP assertion library.
    tumblr.js Tumblr API library.
    twilio Twilio API library.
    twit Twitter API library.
    validator Used in conjunction with express-validator in controllers/api.js.

    Useful Tools and Resources

    • JavaScripting – The Database of JavaScript Libraries
    • JS Recipes – JavaScript tutorials for backend and frontend development.
• HTML to Pug converter – A free online converter that turns HTML files into Pug syntax in real time.
    • JavascriptOO – A directory of JavaScript libraries with examples, CDN links, statistics, and videos.
    • Favicon Generator – Generate favicons for PC, Android, iOS, Windows 8.

    Recommended Design Resources

    Recommended Node.js Libraries

    • Nodemon – Automatically restart Node.js server on code changes.
    • geoip-lite – Geolocation coordinates from IP address.
    • Filesize.js – Pretty file sizes, e.g. filesize(265318); // "265.32 kB".
    • Numeral.js – Library for formatting and manipulating numbers.
    • Node Inspector – Node.js debugger based on Chrome Developer Tools.
    • node-taglib – Library for reading the meta-data of several popular audio formats.
    • sharp – Node.js module for resizing JPEG, PNG, WebP and TIFF images.

    Recommended Client-side Libraries

    • Framework7 – Full Featured HTML Framework For Building iOS7 Apps.
    • InstantClick – Makes your pages load instantly by pre-loading them on mouse hover.
    • NProgress.js – Slim progress bars like on YouTube and Medium.
    • Hover – Awesome CSS3 animations on mouse hover.
    • Magnific Popup – Responsive jQuery Lightbox Plugin.
    • jQuery Raty – Star Rating Plugin.
    • Headroom.js – Hide your header until you need it.
    • X-editable – Edit form elements inline.
    • Offline.js – Detect when user’s internet connection goes offline.
    • Alertify.js – Sweet looking alerts and browser dialogs.
    • selectize.js – Styleable select elements and input tags.
    • drop.js – Powerful Javascript and CSS library for creating dropdowns and other floating displays.
    • scrollReveal.js – Declarative on-scroll reveal animations.

    Pro Tips

• When installing an NPM package, add a --save flag and it will automatically be added to package.json as well. For example, npm install --save moment.
    • Use async.parallel() when you need to run multiple asynchronous tasks, and then render a page, but only when all tasks are completed. For example, you might want to scrape 3 different websites for some data and render the results in a template after all 3 websites have been scraped.
• Need to find a specific object inside an Array? Use the _.find function from Lodash. For example, this is how you would retrieve a Twitter token from the database: var token = _.find(req.user.tokens, { kind: 'twitter' });, where the first parameter is the array to search and the second is the object to match.
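Both of those tips also have native equivalents in modern Node.js, if you would rather avoid the extra dependencies. A small self-contained sketch with made-up data:

```javascript
// _.find equivalent with Array.prototype.find: look up the Twitter token
const user = {
  tokens: [
    { kind: 'facebook', accessToken: 'fb-secret' },
    { kind: 'twitter', accessToken: 'tw-secret' }
  ]
};
const token = user.tokens.find((t) => t.kind === 'twitter');
console.log(token.accessToken); // 'tw-secret'

// async.parallel equivalent with Promise.all: run tasks concurrently and
// continue only when every one has finished (the scrapers are stubbed out here)
const scrapeSiteA = () => Promise.resolve('A data');
const scrapeSiteB = () => Promise.resolve('B data');

Promise.all([scrapeSiteA(), scrapeSiteB()]).then(([a, b]) => {
  console.log(a, b); // both tasks completed
  // res.render('results', { a, b });
});
```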

    FAQ

    Why do I get 403 Error: Forbidden when submitting a form?

    You need to add the following hidden input element to your form. This has been added in the pull request #40 as part of the CSRF protection.

    input(type='hidden', name='_csrf', value=_csrf)
    

    Note: It is now possible to whitelist certain URLs. In other words you can specify a list of routes that should bypass CSRF verification check.

    Note 2: To whitelist dynamic URLs use regular expression tests inside the CSRF middleware to see if req.originalUrl matches your desired pattern.
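As an illustration of that regular-expression idea (the patterns and helper name below are hypothetical, not part of the project):

```javascript
// Hypothetical whitelist of route patterns that should bypass CSRF checks
const csrfExclude = [/^\/api\/upload/, /^\/webhooks\//];

function shouldBypassCsrf(originalUrl) {
  return csrfExclude.some((pattern) => pattern.test(originalUrl));
}

// Inside the CSRF middleware in app.js you would call next() early for matches:
// app.use((req, res, next) => {
//   if (shouldBypassCsrf(req.originalUrl)) return next();
//   lusca.csrf()(req, res, next);
// });

console.log(shouldBypassCsrf('/api/upload'));      // true
console.log(shouldBypassCsrf('/account/profile')); // false
```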

    I am getting MongoDB Connection Error, how do I fix it?

    That’s a custom error message defined in app.js to indicate that there was a problem connecting to MongoDB:

    mongoose.connection.on('error', (err) => {
      console.error(err);
      console.log('%s MongoDB connection error. Please make sure MongoDB is running.', chalk.red('✗'));
      process.exit();
    });

    You need to have a MongoDB server running before launching app.js. You can download MongoDB here, or install it via a package manager. Windows users, read Install MongoDB on Windows.

    Tip: If you are always connected to the internet, you could just use MongoDB Atlas or Compose instead of downloading and installing MongoDB locally. You will only need to update database credentials in .env file.

    I get an error when I deploy my app, why?

Chances are you haven’t changed the Database URI in .env. If MONGODB is set to localhost, it will only work on your machine as long as MongoDB is running. When you deploy to Heroku, OpenShift or some other provider, you will not have MongoDB running on localhost. You need to create an account with MongoDB Atlas or Compose, then create a free tier database. See Deployment for more information on how to set up an account and a new database step by step with MongoDB Atlas.

    Why Pug (Jade) instead of Handlebars?

    When I first started this project I didn’t have any experience with Handlebars. Since then I have worked on Ember.js apps and got myself familiar with the Handlebars syntax. While it is true Handlebars is easier, because it looks like good old HTML, I have no regrets picking Jade over Handlebars. First off, it’s the default template engine in Express, so someone who has built Express apps in the past already knows it. Secondly, I find extends and block to be indispensable, which as far as I know, Handlebars does not have out of the box. And lastly, subjectively speaking, Jade looks much cleaner and shorter than Handlebars, or any non-HAML style for that matter.

    Why do you have all routes defined in app.js?

For the sake of simplicity. While there might be a better approach, such as passing app context to each controller as outlined in this blog, I find such a style confusing for beginners. It took me a long time to grasp the concept of exports and module.exports, let alone having a global app reference in other files. That, to me, is backward thinking. app.js is the “heart of the app”; it should be the one referencing models, routes, controllers, etc. When working solo on small projects, I actually prefer to have everything inside app.js, as is the case with this REST API server.

    How do I switch SendGrid for another email delivery service, like Mailgun or SparkPost?

Inside the nodemailer.createTransport method arguments, simply change the service from 'Sendgrid' to another email service. Also be sure to update the username and password below it. See the list of all services supported by Nodemailer.
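As a sketch, assuming Mailgun SMTP credentials in .env (the variable names here are my own, not necessarily the project’s), the transporter would look something like:

```javascript
const nodemailer = require('nodemailer');

// Swap 'SendGrid' for any other well-known service Nodemailer supports, e.g. 'Mailgun'
const transporter = nodemailer.createTransport({
  service: 'Mailgun',
  auth: {
    user: process.env.MAILGUN_USER,     // assumed variable name
    pass: process.env.MAILGUN_PASSWORD  // assumed variable name
  }
});
```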

    How It Works (mini guides)

This section is intended to give you a detailed explanation of how a particular piece of functionality works. Maybe you are just curious about how it works, or maybe you are lost and confused while reading the code; either way, I hope it provides some guidance.

    Custom HTML and CSS Design 101

    HTML5 UP has many beautiful templates that you can download for free.

    When you download the ZIP file, it will come with index.html, images, css and js folders. So, how do you integrate it with Hackathon Starter? Hackathon Starter uses Bootstrap CSS framework, but these templates do not. Trying to use both CSS files at the same time will likely result in undesired effects.

Note: With the custom templates approach, you should understand that you cannot reuse any of the views I have created: layout, home page, api browser, login, signup, account management, contact. Those views were built using the Bootstrap grid and styles. You would have to manually update the grid using the different syntax provided by the template. Having said that, you can mix and match if you want to: use Bootstrap for the main app interface and a custom template for the landing page.

Let’s start from the beginning. For this example I will use the Escape Velocity template.

    Note: For the sake of simplicity I will only consider index.html, and skip left-sidebar.html, no-sidebar.html, right-sidebar.html.

    Move all JavaScript files from html5up-escape-velocity/js to public/js. Then move all CSS files from html5up-escape-velocity/css to public/css. And finally, move all images from html5up-escape-velocity/images to public/images. You could move it to the existing img folder, but that would require manually changing every img reference. Grab the contents of index.html and paste it into HTML To Pug.

    Note: Do not forget to update all the CSS and JS paths accordingly.

Create a new file escape-velocity.pug in the views folder and paste in the Pug markup. Whenever you see code like res.render('account/login'), it means the app will look for the views/account/login.pug file.

    Let’s see how it looks. Create a new controller escapeVelocity inside controllers/home.js:

    exports.escapeVelocity = (req, res) => {
      res.render('escape-velocity', {
        title: 'Landing Page'
      });
    };

    And then create a route in app.js. I placed it right after the index controller:

    app.get('/escape-velocity', homeController.escapeVelocity);

    Restart the server (if you are not using nodemon), then you should see the new template at http://localhost:8080/escape-velocity.

I will stop right here, but if you would like to use this template as more than just a single page, take a look at how these Pug templates work: layout.pug – base template, index.pug – home page, partials/header.pug – Bootstrap navbar, partials/footer.pug – sticky footer. You will have to manually break the template apart into smaller pieces. Figure out which part of the template you want to keep the same on all pages – that’s your new layout.pug. Then each page that changes, be it index.pug, about.pug or contact.pug, will be embedded in your new layout.pug via block content. Use the existing templates as a reference.

This is a rather lengthy process, and templates you get from elsewhere might use yet another grid system. That’s why I chose Bootstrap for Hackathon Starter. Many people are already familiar with it, and it’s easy to get started with even if you have never used Bootstrap before. You can also buy many beautifully designed Bootstrap themes at Themeforest and use them as a drop-in replacement for Hackathon Starter. However, if you would like to go with a completely custom HTML/CSS design, this should help you get started!


    How do flash messages work in this project?

Flash messages allow you to display a message at the end of a request and access it on the next request, and only the next request. For instance, on a failed login attempt you would display an alert with an error message, but as soon as you refresh that page or visit a different page and come back to the login page, that error message will be gone. It is only displayed once. This project uses the express-flash module for flash messages, which is built on top of connect-flash, the module I used in this project initially. With express-flash you don’t have to explicitly send a flash message to every view inside res.render(); all flash messages are available in your views via the messages object by default, thanks to express-flash.
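For reference, a minimal sketch of how express-flash is typically wired up in app.js (the session options here are illustrative, not necessarily the project’s exact ones):

```javascript
const express = require('express');
const session = require('express-session');
const flash = require('express-flash');

const app = express();

// express-flash stores messages in the session, so session middleware must come first
app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: true,
  saveUninitialized: true
}));
app.use(flash()); // after this, every rendered view can read the `messages` object
```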

Flash messages have a two-step process. You use req.flash('errors', { msg: 'Error message goes here' }) to create a flash message in your controllers, and then display it in your views:

    if messages.errors
      .alert.alert-danger.fade.in
        for error in messages.errors
          div= error.msg

In the first step, 'errors' is the name of a flash message, which should match the name of the property on the messages object in your views. You place alert messages inside if messages.errors because you don’t want to show them unless flash messages are actually present. The reason you pass an error as { msg: 'Error message goes here' } instead of just the string 'Error message goes here' is for the sake of consistency. To clarify: the express-validator module, which is used for validating and sanitizing user input, returns all errors as an array of objects, where each object has a msg property explaining why the error occurred. Here is a more general example of what express-validator returns when there are errors present:

    [
      { param: "name", msg: "Name is required", value: "<received input>" },
      { param: "email", msg: "A valid email is required", value: "<received input>" }
    ]

To keep consistent with that style, you should pass all flash messages as { msg: 'My flash message' } instead of a string. Otherwise you will just see an alert box without an error message. That is because the partials/flash.pug template tries to output error.msg (i.e. "My flash message".msg); in other words, it tries to access a msg property on a String object, which returns undefined. Everything I just mentioned about errors also applies to “info” and “success” flash messages, and you could even create a new one yourself, such as:

    Data Usage Controller (Example)

    req.flash('warning', { msg: 'You have exceeded 90% of your data usage' });
    

    User Account Page (Example)

    if messages.warning
      .alert.alert-warning.fade.in
        for warning in messages.warning
          div= warning.msg

partials/flash.pug is a partial template that defines how flash messages are formatted. Previously, flash messages were scattered throughout each view that used them (contact, login, signup, profile); now, thankfully, it uses a DRY approach.

    The flash messages partial template is included in the layout.pug, along with footer and navigation.

    body
        include partials/header
    
        .container
          include partials/flash
          block content
    
        include partials/footer

    If you have any further questions about flash messages, please feel free to open an issue and I will update this mini-guide accordingly, or send a pull request if you would like to include something that I missed.


    How do I create a new page?

    A more correct way to say this would be “How do I create a new route?” The main file app.js contains all the routes. Each route has a callback function associated with it. Sometimes you will see 3 or more arguments to routes. In cases like that, the first argument is still a URL string, while middle arguments are what’s called middleware. Think of middleware as a door. If this door prevents you from continuing forward, you won’t get to your callback function. One such example is a route that requires authentication.

    app.get('/account', passportConfig.isAuthenticated, userController.getAccount);

    It always goes from left to right. A user visits /account page. Then isAuthenticated middleware checks if you are authenticated:

    exports.isAuthenticated = (req, res, next) => {
      if (req.isAuthenticated()) {
        return next();
      }
      res.redirect('/login');
    };

    If you are authenticated, you let this visitor pass through your “door” by calling return next();. It then proceeds to the next middleware until it reaches the last argument, which is a callback function that typically renders a template on GET requests or redirects on POST requests. In this case, if you are authenticated, you will be redirected to Account Management page, otherwise you will be redirected to Login page.

    exports.getAccount = (req, res) => {
      res.render('account/profile', {
        title: 'Account Management'
      });
    };

Express.js has app.get, app.post, app.put and app.delete, but for the most part you will only use the first two HTTP verbs, unless you are building a RESTful API. If you just want to display a page, use GET; if you are submitting a form or sending a file, use POST.

Here is a typical workflow for adding new routes to your application. Let’s say we are building a page that lists all books from the database.

    Step 1. Start by defining a route.

    app.get('/books', bookController.getBooks);

Note: As of Express 4.x you can define your routes like so:

    app.route('/books')
      .get(bookController.getBooks)
      .post(bookController.createBooks)
      .put(bookController.updateBooks)
      .delete(bookController.deleteBooks)

And here is how a route would look if it required authentication and authorization middleware:

    app.route('/api/twitter')
      .all(passportConfig.isAuthenticated)
      .all(passportConfig.isAuthorized)
      .get(apiController.getTwitter)
      .post(apiController.postTwitter)

Use whichever style makes sense to you; either one is acceptable. I think chaining HTTP verbs on app.route is a very clean and elegant approach, but on the other hand I can no longer see all my routes at a glance the way I can with one route per line.

    Step 2. Create a new schema and a model Book.js inside the models directory.

    const mongoose = require('mongoose');
    
    const bookSchema = new mongoose.Schema({
      name: String
    });
    
    const Book = mongoose.model('Book', bookSchema);
    module.exports = Book;

    Step 3. Create a new controller file called book.js inside the controllers directory.

    /**
     * GET /books
     * List all books.
     */
    const Book = require('../models/Book.js');
    
    exports.getBooks = (req, res, next) => {
      Book.find((err, docs) => {
        if (err) { return next(err); }
        res.render('books', { books: docs });
      });
    };

    Step 4. Import that controller in app.js.

    const bookController = require('./controllers/book');

    Step 5. Create books.pug template.

    extends layout
    
    block content
      .page-header
        h3 All Books
    
      ul
        for book in books
          li= book.name

    That’s it! I will say that you could have combined Steps 1, 2 and 3 as follows:

    app.get('/books',(req, res) => {
      Book.find((err, docs) => {
        res.render('books', { books: docs });
      });
    });

    Sure, it’s simpler, but as soon as app.js passes 1,000 lines of code it becomes difficult to navigate. The whole point of this boilerplate project was to separate concerns, so you can work with your teammates without constantly running into merge conflicts. Imagine four developers working on a single app.js; I promise you it won’t be fun resolving merge conflicts all the time. If you are the only developer, then it’s fine. But as I said, once the file grows past a certain size, it becomes difficult to maintain everything in a single file.

    That’s all there is to it. Express.js is super simple to use. Most of the time you will be dealing with other APIs that do the real work: Mongoose for querying the database, socket.io for sending and receiving messages over websockets, Nodemailer for sending emails, express-validator for form validation, Cheerio for parsing websites, and so on.


    How do I use Socket.io with Hackathon Starter?

    Dan Stroot submitted an excellent pull request that adds a real-time dashboard with socket.io. And as much as I’d like to add it to the project, I think it violates one of the main principles of the Hackathon Starter:

    When I started this project, my primary focus was on simplicity and ease of use. I also tried to make it as generic and reusable as possible to cover most use cases of hackathon web apps, without being too specific.

    When I need to use socket.io, I really need it, but most of the time I don’t. More importantly, websockets support is still experimental on most hosting providers. As of October 2013, Heroku supports websockets, but only after you opt in by running this command:

    heroku labs:enable websockets -a myapp

    And what if you are deploying to OpenShift? They do support websockets, but it is currently in a preview state. So, for OpenShift you would need to change the socket.io connect URI to the following:

    const socket = io.connect('http://yoursite-namespace.rhcloud.com:8000');

    Wait, why is it on port 8000? Who knows, and if I hadn’t run across this blog post I wouldn’t even know I had to use port 8000.

    I am really glad that Heroku and OpenShift at least have websockets support, because many other PaaS providers still do not. Due to the aforementioned issues with websockets, I cannot include socket.io as part of the Hackathon Starter. For now… If you need to use socket.io in your app, please continue reading.

    First you need to install socket.io:

    npm install socket.io --save

    Replace const app = express(); with the following code:

    const app = express();
    const server = require('http').Server(app);
    const io = require('socket.io')(server);

    I like to have the following code organization in app.js (from top to bottom): module dependencies, import controllers, import configs, connect to database, express configuration, routes, start the server, socket.io stuff. That way I always know where to look for things.

    Add the following code at the end of app.js:

    io.on('connection', (socket) => {
      socket.emit('greet', { hello: 'Hey there browser!' });
      socket.on('respond', (data) => {
        console.log(data);
      });
      socket.on('disconnect', () => {
        console.log('Socket disconnected');
      });
    });

    One last thing left to change:

    app.listen(app.get('port'), () => {

    to

    server.listen(app.get('port'), () => {

    At this point we are done with the back-end.

    You now have a choice: include your JavaScript code in Pug templates, or keep all your client-side JavaScript in a separate file, in main.js. I will admit, when I first started out with Node.js and JavaScript in general, I placed all JavaScript code inside templates because I had access to template variables passed in from Express right then and there. It’s the easiest thing you can do, but also the least efficient and the hardest to maintain. Since then I almost never include inline JavaScript inside templates.

    But it’s also understandable if you want to take the easier road. Most of the time you don’t even care about performance during hackathons; you just want to “get shit done” before the time runs out. Either way, use whichever approach makes more sense to you. At the end of the day, it’s what you build that matters, not how you build it.

    If you want to stick all your JavaScript inside templates, then in layout.pug – your main template file, add this to head block.

    script(src="/socket.io/socket.io.js")
    script.
        let socket = io.connect(window.location.href);
        socket.on('greet', function (data) {
          console.log(data);
          socket.emit('respond', { message: 'Hey there, server!' });
        });

    Note: Notice the path of socket.io.js: you don’t actually need a socket.io.js file anywhere in your project; it is generated automatically at runtime.

    If you want to have JavaScript code separate from templates, move that inline script code into main.js, inside the $(document).ready() function:

    $(document).ready(function() {
    
      // Place JavaScript code here...
      let socket = io.connect(window.location.href);
      socket.on('greet', function (data) {
        console.log(data);
        socket.emit('respond', { message: 'Hello to you too, Mr.Server!' });
      });
    
    });

    And we are done!

    Cheatsheets

    ES6 Cheatsheet

    Declarations

    Declares a read-only named constant.

    const name = 'yourName';

    Declares a block scope local variable.

    let index = 0;
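    Both behaviors can be checked in a few lines (the variable names here are just for illustration):

```javascript
const name = 'yourName';
let reassignFailed = false;
try {
  name = 'anotherName'; // reassigning a read-only const throws a TypeError
} catch (e) {
  reassignFailed = e instanceof TypeError;
}
console.log(reassignFailed); // true

let index = 0;
{
  let index = 1; // a separate block-scoped binding that shadows the outer one
}
console.log(index); // 0 (the outer binding is untouched)
```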

    Template Strings

    Using the `${}` syntax, strings can embed expressions.

    const name = 'Oggy';
    const age = 3;
    
    console.log(`My cat is named ${name} and is ${age} years old.`);

    Modules

    Import functions, objects or primitives exported from an external module. These are the most common forms of importing:

    const name = require('module-name');
    const { foo, bar } = require('module-name');

    To export functions, objects or primitives from a given file or module.

    module.exports = { myFunction };
    module.exports.name = 'yourName';
    module.exports = myFunctionOrClass;

    Spread Operator

    The spread operator allows an expression to be expanded in places where multiple arguments (for function calls) or multiple elements (for array literals) are expected.

    myFunction(...iterableObject);
    <ChildComponent {...this.props} />
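    A runnable example of both uses (the names sum, nums and more are just for illustration):

```javascript
// Spread in a function call: each element of the array becomes an argument.
function sum(a, b, c) {
  return a + b + c;
}
const nums = [1, 2, 3];
console.log(sum(...nums)); // 6

// Spread in an array literal: the elements are expanded in place.
const more = [...nums, 4, 5];
console.log(more); // [ 1, 2, 3, 4, 5 ]
```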

    Promises

    A Promise is used in asynchronous computations to represent an operation that hasn’t completed yet, but is expected in the future.

    var p = new Promise(function(resolve, reject) { });

    The catch() method returns a Promise and deals with rejected cases only.

    p.catch(function(reason) { /* handle rejection */ });

    The then() method returns a Promise. It takes two arguments: a callback for the success case and another for the failure case.

    p.then(function(value) { /* handle fulfillment */ }, function(reason) { /* handle rejection */ });

    The Promise.all(iterable) method returns a promise that resolves when all of the promises in the iterable argument have resolved, or rejects with the reason of the first passed promise that rejects.

    Promise.all([p1, p2, p3]).then(function(values) { console.log(values) });

    Arrow Functions

    Arrow function expressions have a shorter syntax and lexically bind the this value. Arrow functions are anonymous.

    singleParam => { statements }
    () => { statements }
    (param1, param2) => expression
    const arr = [1, 2, 3, 4, 5];
    const squares = arr.map(x => x * x);
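    The lexical this binding is the part that trips people up most often. A small sketch (the Counter constructor is just for illustration):

```javascript
// An arrow function inherits `this` from the enclosing scope, so the
// forEach callback below sees the Counter instance. A regular
// `function () {}` callback would get its own `this` and this would break.
function Counter() {
  this.count = 0;
  [1, 2, 3].forEach(() => {
    this.count += 1;
  });
}

const counter = new Counter();
console.log(counter.count); // 3
```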

    Classes

    The class declaration creates a new class using prototype-based inheritance.

    class Person {
      constructor(name, age, gender) {
        this.name   = name;
        this.age    = age;
        this.gender = gender;
      }
    
      incrementAge() {
        this.age++;
      }
    }
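    Using the Person class above (repeated here so the snippet is self-contained; the instance values are just for illustration):

```javascript
class Person {
  constructor(name, age, gender) {
    this.name = name;
    this.age = age;
    this.gender = gender;
  }

  incrementAge() {
    this.age++;
  }
}

const alice = new Person('Alice', 30, 'female');
alice.incrementAge();
console.log(`${alice.name} is ${alice.age}`); // Alice is 31
```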

    🎁 Credits: DuckDuckGo and @DrkSephy.

    🔝 back to top

    JavaScript Date Cheatsheet

    Unix Timestamp (seconds)

    Math.floor(Date.now() / 1000);
    moment().unix();
    

    Add 30 minutes to a Date object

    var now = new Date();
    now.setMinutes(now.getMinutes() + 30);
    moment().add(30, 'minutes');
    

    Date Formatting

    // MM-DD-YYYY
    var now = new Date();
    
    var DD = now.getDate();
    var MM = now.getMonth() + 1;
    var YYYY = now.getFullYear();
    
    if (DD < 10) {
      DD = '0' + DD;
    }
    
    if (MM < 10) {
      MM = '0' + MM;
    }
    
    console.log(MM + '-' + DD + '-' + YYYY); // 03-30-2016
    console.log(moment().format('MM-DD-YYYY'));
    
    // hh:mm (12 hour time with am/pm)
    var now = new Date();
    var hours = now.getHours();
    var minutes = now.getMinutes();
    var amPm = hours >= 12 ? 'pm' : 'am';
    
    hours = hours % 12;
    hours = hours ? hours : 12;
    minutes = minutes < 10 ? '0' + minutes : minutes;
    
    console.log(hours + ':' + minutes + ' ' + amPm); // 1:43 am
    console.log(moment().format('h:mm a'));
    

    Next week Date object

    var today = new Date();
    var nextWeek = new Date(today.getTime() + 7 * 24 * 60 * 60 * 1000);
    moment().add(7, 'days');
    

    Yesterday Date object

    var today = new Date();
    var yesterday = new Date(today);
    yesterday.setDate(today.getDate() - 1);
    moment().subtract(1, 'days');
    

    🔝 back to top

    Mongoose Cheatsheet

    Find all users:

    User.find((err, users) => {
      console.log(users);
    });

    Find a user by email:

    let userEmail = 'example@gmail.com';
    User.findOne({ email: userEmail }, (err, user) => {
      console.log(user);
    });

    Find 5 most recent user accounts:

    User
      .find()
      .sort({ _id: -1 })
      .limit(5)
      .exec((err, users) => {
        console.log(users);
      });

    Get total count of a field from all documents:

    Let’s suppose that each user has a votes field and you would like to count the total number of votes in your database across all users. One very inefficient way would be to loop through each document and manually accumulate the count. Or you could use the MongoDB Aggregation Framework instead:

    User.aggregate([{ $group: { _id: null, total: { $sum: '$votes' } } }], (err, result) => {
      console.log(result[0].total);
    });

    🔝 back to top

    Docker

    You will need docker and docker-compose installed to build the application.

    After installing Docker, start the application with the following commands:

    # To build the project for the first time or when you add dependencies
    docker-compose build web
    
    # To start the application (or to restart after making changes to the source code)
    docker-compose up web
    
    

    To view the app, find your Docker IP address and go to port 8080 (this will typically be http://localhost:8080/). To use a port other than 8080, you will need to modify the port in app.js, Dockerfile and docker-compose.yml.
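    For reference, the port mapping lives in docker-compose.yml in a section similar to the sketch below. This is only a sketch; the actual file in the repo may differ, and web is the service name taken from the commands above:

```yaml
services:
  web:
    build: .
    ports:
      - "8080:8080"  # host:container; change both sides together with app.js and the Dockerfile
```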

    Deployment

    Once you are ready to deploy your app, you will need to create an account with a cloud platform to host it. These are not the only choices, but they are my top picks. From my experience, Heroku is the easiest to get started with: it will automatically restart your Node.js process when it crashes, and it offers zero-downtime deployments and custom domain support on free accounts. Additionally, you can create an account with MongoDB Atlas and then pick one of the 4 providers below. Again, there are plenty of other choices and you are not limited to just the ones listed below.

    1-Step Deployment with Heroku

    • Download and install Heroku Toolbelt
    • In terminal, run heroku login and enter your Heroku credentials
    • From your app directory run heroku create
    • Run heroku addons:create mongolab. This will set up the mLab add-on and configure the MONGODB_URI environment variable in your Heroku app for you.
    • Lastly, do git push heroku master. Done!

    Note: To install Heroku add-ons your account must be verified.


    • Go to https://www.mongodb.com/cloud/atlas
    • Click the green Get started free button
    • Fill in your information then hit Get started free
    • You will be redirected to Create New Cluster page.
    • Select a Cloud Provider and Region (such as AWS and a free tier region)
    • Set the cluster Tier to Free Shared Clusters
    • Give Cluster a name (default: Cluster0)
    • Click on green ⚡Create Cluster button
    • Now, to access your database you need to create a DB user. To create a new MongoDB user, from the Clusters view, select the Security tab
    • Under the MongoDB Users tab, click on +Add New User
    • Fill in a username and password and give it the Atlas Admin User Privilege
    • Next, you will need to create an IP address whitelist and obtain the connection URI. In the Clusters view, under the cluster details (i.e. SANDBOX – Cluster0), click on the CONNECT button.
    • Under section (1) Check the IP Whitelist, click on ALLOW ACCESS FROM ANYWHERE. The form will add a field with 0.0.0.0/0. Click SAVE to save the 0.0.0.0/0 whitelist.
    • Under section (2) Choose a connection method, click on Connect Your Application
    • In the new screen, click on Standard connection string (3.4+ driver). WARNING: Do not pick 3.6+ since there Express Session currently has a compatibility issue with it.
    • Finally, copy the URI connection string and replace the URI in MONGODB_URI of .env.example with this URI string. Make sure to replace the <password> placeholder with the db User password that you created under the Security tab.
    • Note that after some of the steps in the Atlas UI, you may see a banner stating We are deploying your changes. You will need to wait for the deployment to finish before using the DB in your application.

    Note: As an alternative to MongoDB Atlas, there is also Compose.

    **NOTE** *These instructions might be out of date due to changes in OpenShift. Heroku is currently a good free alternative. If you know the new process, please feel free to help us update this page*
    • First, install this Ruby gem: sudo gem install rhc 💎
    • Run rhc login and enter your OpenShift credentials
    • From your app directory run rhc app create MyApp nodejs-0.10
    • Note: MyApp is the name of your app (no spaces)
    • Once that is done, you will be provided with URL, SSH and Git Remote links
    • Visit provided URL and you should see the Welcome to your Node.js application on OpenShift page
    • Copy and paste Git Remote into git remote add openshift YOUR_GIT_REMOTE
    • Before you push your app, you need to do a few modifications to your code

    Add these two lines to app.js, just place them anywhere before app.listen():

    var IP_ADDRESS = process.env.OPENSHIFT_NODEJS_IP || '127.0.0.1';
    var PORT = process.env.OPENSHIFT_NODEJS_PORT || 8080;

    Then change app.listen() to:

    app.listen(PORT, IP_ADDRESS, () => {
      console.log(`Express server listening on port ${PORT} in ${app.settings.env} mode`);
    });

    Add this to package.json, after name and version. This is necessary because, by default, OpenShift looks for a server.js file. And by specifying supervisor app.js, it will automatically restart the server when the Node.js process crashes.

    "main": "app.js",
    "scripts": {
      "start": "supervisor app.js"
    },
    • Finally, you can now push your code to OpenShift by running git push -f openshift master
    • Note: The first time you run this command, you have to pass -f (force) flag because OpenShift creates a dummy server with the welcome page when you create a new Node.js app. Passing -f flag will override everything with your Hackathon Starter project repository. Do not run git pull as it will create unnecessary merge conflicts.
    • And you are done!

    **NOTE** *Beyond the initial 12-month trial of Azure, the platform does not seem to offer a free tier for hosting NodeJS apps. If you are looking for a free tier service to host your app, Heroku might be a better choice at this point*
    • Login to Windows Azure Management Portal
    • Click the + NEW button on the bottom left of the portal
    • Click COMPUTE, then WEB APP, then QUICK CREATE
    • Enter a name for URL and select the datacenter REGION for your web site
    • Click on CREATE WEB APP button
    • Once the web site status changes to Running, click on the name of the web site to access the Dashboard
    • At the bottom right of the Quickstart page, select Set up a deployment from source control
    • Select Local Git repository from the list, and then click the arrow
    • To enable Git publishing, Azure will ask you to create a user name and password
    • Once the Git repository is ready, you will be presented with a GIT URL
    • Inside your Hackathon Starter directory, run git remote add azure [Azure Git URL]
    • To push your changes simply run git push azure master
    • Note: You will be prompted for the password you created earlier
    • On Deployments tab of your Windows Azure Web App, you will see the deployment history

    IBM Bluemix Cloud Platform

    NOTE At this point it appears that Bluemix’s free tier to host NodeJS apps is limited to 30 days. If you are looking for a free tier service to host your app, Heroku might be a better choice at this point

    1. Create a Bluemix Account

      Sign up for Bluemix, or use an existing account.

    2. Download and install the Cloud Foundry CLI to push your applications to Bluemix.

    3. Create a manifest.yml file in the root of your application.

    applications:
    - name:      <your-app-name>
      host:      <your-app-host>
      memory:    128M
      services:
      - myMongo-db-name
    

    The host you use will determine your application URL initially, e.g. <host>.mybluemix.net. The service name myMongo-db-name is a declaration of your MongoDB service. If you are using other services, Watson for example, you declare them the same way.

    4. Connect and login to Bluemix via the Cloud Foundry CLI
    $ cf login -a https://api.ng.bluemix.net

    5. Create a MongoDB service
    $ cf create-service mongodb 100 [your-service-name]

    Note: this is a free, experimental version of a MongoDB instance. Use the MongoDB by Compose instance for production applications:

    $ cf create-service compose-for-mongodb Standard [your-service-name]
    
    6. Push the application

      $ cf push

      $ cf env <your-app-name>
      (To view the environment variables created for your application)

    Done! Now go to the staging domain (<host>.mybluemix.net) and see your app running.

    • Cloud Foundry Commands
    • More Bluemix samples
    • Simple ToDo app in a programming language of your choice

    IBM Watson

    Be sure to check out the full list of Watson services to further enhance your application functionality with little effort. Watson services are easy to get going; each is simply a RESTful API call. Here is an example of the Watson Tone Analyzer, which detects the emotional context of a piece of text that you send to Watson.

    Watson catalog of services

    Conversation – Quickly build and deploy chatbots and virtual agents across a variety of channels, including mobile devices, messaging platforms, and even robots.

    Discovery – Unlock hidden value in data to find answers, monitor trends and surface patterns with the world’s most advanced cloud-native insight engine.

    Language Translator – Translate text from one language to another.

    Natural Language Classifier – Interpret and classify natural language with confidence.

    Natural Language Understanding – Analyze text to extract meta-data from content such as concepts, entities, keywords and more.

    Personality Insights – Predict personality characteristics, needs and values through written text.

    Speech to Text – Convert audio and voice into written text for quick understanding of content.

    Text to Speech – Convert written text into natural sounding audio in a variety of languages and voices.

    Tone Analyzer – Understand emotions, social tendencies and perceived writing style.

    Visual Recognition – Tag, classify and search visual content using machine learning.

    Click here for live demos of each Watson service.


    Google Cloud Platform

    • Download and install Node.js

    • Select or create a Google Cloud Platform Console project

    • Enable billing for your project (there’s a $300 free trial)

    • Install and initialize the Google Cloud SDK

    • Create an app.yaml file at the root of your hackathon-starter folder with the following contents:

      runtime: nodejs
      env: flex
      manual_scaling:
        instances: 1
    • Make sure you’ve set MONGODB_URI in .env.example

    • Run the following command to deploy the hackathon-starter app:

      gcloud app deploy
    • Monitor your deployed app in the Cloud Console

    • View the logs for your app in the Cloud Console

    Changelog

    You can find the changelog for the project in: CHANGELOG.md

    Contributing

    If something is unclear, confusing, or needs to be refactored, please let me know. Pull requests are always welcome, but due to the opinionated nature of this project, I cannot accept every pull request. Please open an issue before submitting a pull request. This project uses Airbnb JavaScript Style Guide with a few minor exceptions. If you are submitting a pull request that involves Pug templates, please make sure you are using spaces, not tabs.

    License

    The MIT License (MIT)

    Copyright (c) 2014-2019 Sahat Yalkabov

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    Source: https://github.com/sdm-org/cd-docker-30
  • msgpack23

    msgpack23

    Conan Center

    A modern, header-only C++ library for MessagePack serialization and deserialization.

    Overview

    msgpack23 is a lightweight library that provides a straightforward approach to serializing and deserializing C++ data structures into the MessagePack format. It is written in modern C++ (targeting C++20 and beyond) and leverages templates and type traits to provide a flexible, zero-dependency solution for packing and unpacking various data types.

    Key Features

    • Header-only: Simply include the header and start using it—no additional build steps or dependencies.
    • Modern C++: Uses C++ features like concepts to handle containers, maps, enums, time points, and user-defined types.
    • Extensible: Allows you to define custom types by implementing pack and unpack member functions, automatically integrating them into the serialization pipeline.
    • Collection and Map Support: Automatically detects and serializes STL containers (e.g., std::vector, std::map) without extra work.
    • Time Point Support: Native support for serializing std::chrono::time_point objects.
    • Variety of Primitive Types: Integers (signed/unsigned), booleans, floating-point, std::string, byte arrays, and nullptr are all supported out-of-the-box.
    • Endian-Aware: Properly handles endianness using std::endian and std::byteswap to ensure portability.

    Getting Started

    1. Clone the Repository

      git clone https://github.com/rwindegger/msgpack23.git
    2. Include the Header
      Since this is a header-only library, just include the main header in your project:

      #include "msgpack23.h"
    3. Pack and Unpack

      #include <iostream>
      #include <map>
      #include "msgpack23.hpp"
      
      int main() {
         // Create a map of some data
         std::map<std::string, int> original {{"apple", 1}, {"banana", 2}};
      
         // 1) Pack into a vector of std::byte
         std::vector<std::byte> packedData{};
         auto const inserter = std::back_insert_iterator(packedData);
         msgpack23::Packer packer{inserter};
         packer(original); 
      
         // 2) Unpack back into a map
         std::map<std::string, int> unpacked;
         msgpack23::Unpacker unpacker(packedData);
         unpacker(unpacked);
      
         // Verify the result
         for (auto const& [key, value] : unpacked) {
            std::cout << key << ": " << value << "\n";
         }
         return 0;
      }

    Custom Types

    To serialize your own types, define pack and unpack member functions. Both are templated on the packer or unpacker type T and accept a T &; pack should be const.

    struct MyData {
       int64_t my_integer;
       std::string my_string;
       
       template<typename T>
       void pack(T &packer) const {
          packer(my_integer, my_string);
       }
       
       template<typename T>
       void unpack(T &unpacker) {
          unpacker(my_integer, my_string);
       }
    };

    Now you can use MyData with msgpack23 just like any built-in type:

    MyData const my_data {42, "Hello" };
    std::vector<std::byte> data{};
    auto const inserter = std::back_insert_iterator(data);
    msgpack23::Packer packer{inserter};
    packer(my_data);

    MyData obj{};
    msgpack23::Unpacker unpacker(data);
    unpacker(obj);

    Why msgpack23?

    • Simplicity: A single header with clearly structured pack/unpack logic.
    • Performance: Minimal overhead by using direct memory operations and compile-time type deductions.
    • Flexibility: From primitive types and STL containers to custom structures, everything can be serialized with minimal boilerplate.

    Contributing

    Contributions, bug reports, and feature requests are welcome! Feel free to open an issue or submit a pull request.

    1. Fork it!
    2. Create your feature branch: git checkout -b feature/my-new-feature
    3. Commit your changes: git commit -am 'Add some feature'
    4. Push to the branch: git push origin feature/my-new-feature
    5. Submit a pull request

    License

    This project is licensed under the MIT License.


    Happy packing (and unpacking)! If you have any questions or feedback, please open an issue or start a discussion.

    Source: https://github.com/rwindegger/msgpack23
  • blockchain

    Blockchain

    My personal notes, example codes, best practices and sample projects.

    Author

    Aditya Hajare (Linkedin).

    Current Status

    WIP (Work In Progress)!

    License

    Open-sourced software licensed under the MIT license.


    Important Notes


    Blockchain Basics

    • It is a decentralized system. Decentralized means the network is powered by its users (Peers) without having any third party, central authority or middleman controlling it.
    • Every Peer has a record of the complete history of all transactions as well as the balance of every account.
    • This bookkeeping is not controlled by one party or a central authority (E.g. Central Bank).
    • It’s all public, and available in one digital ledger which is fully distributed across the network, i.e. everybody sees what everybody is doing.
    • The Blockchain acts as a public ledger.
    • In blockchain all the transactions are logged including:
      • Time
      • Date
      • Participants
      • Amount of every single transaction
    • Each node in the network owns the full copy of the blockchain.
    • The nodes automatically and continuously agree about the current state of the ledger and every transaction in it.
    • If anyone attempts to corrupt a transaction, the nodes will not arrive at a consensus and hence will refuse to incorporate the transaction in the blockchain.
    • So every transaction is public and thousands of nodes unanimously agreed that a transaction has occurred on date X at time Y.
    • Everyone has access to shared single public source of truth.
    • Blockchain in first instance is all about optimizing Shared B2B Processes.

    Why Blockchain As Opposed To An Ordinary Database

    • In a nutshell, a Database solves a Data Problem whereas a Blockchain solves a Digital Asset Problem.
    • Database Solves a Data Problem:
      • Mature technology exists.
      • Centralized or Distributed.
      • Fit for purpose.
      • One party governs the data.
    • Blockchain solves a Digital Asset Problem:
      • Trade, trust and ownership.
      • Digital Assets.
      • Transactionality.
      • Multiple parties govern the data.
      • Remove intermediaries.
      • Solve double spend problem.
      • Need for time and trust in a network.
      • Gives birth to new business models.

    Key Concepts

    • Blockchain: It is a shared, replicated transaction system (Distributed Ledger) which is updated with the help of Smart Contracts and kept consistently synchronized through a process called Consensus. It is a append-only transaction system.
    • Distributed Ledger: It is a database that is consensually shared and synchronized across multiple sites, institutions or geographies, and it is accessible by multiple people.
    • Smart Contracts: It is a self-executing contract with the terms of the agreement between buyer and seller being directly written into lines of code. The code and the agreement contained therein exist across a distributed, decentralized blockchain network.
      • Consensus: The majority of opinion, agreement amoung a group of people.
      • Private Blockchain vs. Public Blockchain: In public blockchain, anyone can send a transaction, while in private blockchain, only participants who are approved can send transactions.
      • Permissioned Blockchain vs. Permissionless Blockchain: Permissionless blockchain allow people to act anonymously (you do not know their identity), while in permissioned blockchain the identities of participants are known.
    • Examples of Public and Permissionless Blockchain: Bitcoin, Ethereum.
      • Examples of Private and Permissioned Blockchain: Hyperledger Fabric, JP Morgan.
    • Cryptocurrencies (Bitcoin, Ethereum etc.): A cryptocurrency is not the same as a blockchain. Cryptocurrencies use a blockchain to store transactions. Most cryptocurrencies use their own type of blockchain with some aspects that make them unique and best suited for their own use case.
    • Distributed Ledger:
      • Distributed Ledger platforms are used for tracking the State of an Asset.
      • An Asset is a digital representation of any real world thing.
      • So anything in the real world, whether tangible or intangible, that can be digitally represented, can be managed on a Distributed Ledger.
      • Tangible assets examples: Cars, Houses, etc.
      • Intangible assets examples: Stocks, Bonds, Certificates and any other kind of financial instruments.
    • Transactions:
      • The state of an Asset on the Distributed Ledger is managed by way of Transactions. In other words, Transactions manage the State of an Asset.
      • A Transaction represents the invocation of business logic that changes or manages the State of Assets on the Distributed Ledger platform.
      • Chaincode encapsulates the business logic.
      • All Transactions are recorded in the Ledger.
      • The recorded Transactions are immutable. i.e. they cannot be updated or deleted.
    • Chaincode:
      • Chaincode implements the business logic and exposes the State management features by way of one or more functions.
      • The functions exposed by Chaincode are executed from the Applications by way of Transactions.
      • Not all of the Transactions lead to the creation of entry in Ledger. Some Transactions are performed to read State of an Asset in the Ledger.
      • Fabric Chaincode can be developed in Golang, NodeJS or Java.
      • Any real world asset that can be digitized can be represented as a Model in the Chaincode.
      • Each chaincode in the network is identified by a name which is unique across the Channel.
      • Instantiation Policy determines which Org can Endorse the Transaction for creation of the Chaincode.
      • Peers do NOT automatically receive the chaincode package via the instantiate transaction; the chaincode must be explicitly installed on each Peer.
      • Shim API is used for coding the Chaincode.
      • Not all Orgs need to install Chaincode. The following entities have the Chaincode:
        • Orgs participating in the transactions.
        • Orgs that will query the ledgers.
        • Chaincode Endorsers
      • Important points about Chaincode:
        • Chaincode may implement multiple Smart Contracts.
        • Chaincode is packaged in standard Tar file format.
        • Installation generates the Package-ID.
        • Each Org approves the specific Package for their Org.
        • Package-ID may be different across Orgs.
        • Whenever a Chaincode is approved by an Org, the Approval is added as a Transaction into the Ledger of that Org.
      • By default State Data is setup in LevelDB but we can configure Peer to store State Data in CouchDB instead of LevelDB.
      • CouchDB is a NoSQL database which can be setup as a State Data store for Peer node.
      • CouchDB allows executing rich queries against State Data, whereas LevelDB does not.
      • Each Chaincode owns the State Data it manages.
      • Direct access to State Data from one Chaincode to another is not allowed. However, a Chaincode can Invoke or Query another Chaincode to access its State Data.
      • Invoke Chaincode from another Chaincode:
        • Since the Chaincode is invoked locally, both the Chaincodes (caller and the called Chaincode) must be on the same Peer.
        • Transaction executes in the same Txn Context.
        • State changes on both the caller and the called Chaincode take effect only if they share a common Channel.
        • No State change takes place on the called Chaincode if the caller and the called Chaincode are on different Channels.
        • Transaction message is passed from the caller Chaincode to the called Chaincode.
        • On a common Channel: State changes take effect for both the caller and the called Chaincode.
        • On different Channels: State changes take effect only for the caller Chaincode.
    • Channel Events:
      • Events are emitted by the Peers.
      • Applications may subscribe to the events.
        • Subscription is at the Channel level, i.e. subscribing requires specifying the Channel.
        • Subscribers can also specify the filter criteria on which the events are evaluated on the Peer.
      • There are 3 types of Events:
        • Block Event: Emitted by the Peer when a new Block is added by the Peer to the Ledger.
        • Transaction Event: Emitted by the Peer when a specified Transaction is received in a Block.
        • Chaincode Event: Emitted by the Chaincode. Chaincode Events are generated in response to the Transactions against the Chaincode.
      • Chaincode Events are not received by the subscriber when the Chaincode is executed by the Endorsers. The events are received by the subscriber when the Peer adds the Transaction to the Ledger.
      • The subscriber can provide the name of the Event in which they are interested.
      • The Transaction Event and Chaincode Events are in the Block payload.
    • Chaincode Lifecycle Endorsement Policy Rule:
      • Network members set up the Lifecycle Endorsement Policy.
      • It decides:
        • How many approvals are needed for a commit to be successful.
        • It may have a rule that says some specific Organization must Endorse the Transaction before it can be committed successfully.
      • This Policy is embedded in the Channel's Genesis Block.
      • Rules can be updated with Channel update transactions.
      • Lifecycle Endorsement Policy can be specified as ImplicitMeta Policy or a Signature Policy.
      • Default Policy is the Rule = "MAJORITY Endorsement". It means that more than HALF the members of the network MUST approve the Chaincode definition for it to be committed successfully.
      • The Policy rule Rule = "ANY Endorsement" requires Only one approval for committing the Chaincode.
      • With the help of Signature Policy, we can create complex/flexible expressions.
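      • As an illustration, the two styles of rules look like this (the Org names are placeholders):

```
# ImplicitMeta policy: aggregates sub-policies by type
Rule = "MAJORITY Endorsement"
Rule = "ANY Endorsement"

# Signature policy: names specific principals using OR / AND / NOutOf
OR('Org1MSP.peer', 'Org2MSP.peer')
AND('Org1MSP.member', OR('Org2MSP.admin', 'Org3MSP.peer'))
```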
    • Peer Launch Modes:
      • Net mode: Chaincode instance launched by Peer. This is the Default mode.
        • Used in live network.
        • Chaincode Logs are written to the container’s file system.
      • Dev mode: Chaincode instance launched by Developer.
        • Development time only.
        • Chaincode Logs are written to the console.
        • No need to install/upgrade for changes.
        • To launch Peer in Dev mode:

        # 1. Launch peer in dev mode
        peer node start --peer-chaincodedev
        # 2. Install chaincode to Peer
        # 3. Run chaincode on Terminal/Shell
        # 4. Instantiate the chaincode to Peer
        • NOTE: Chaincode instance is not launched in the Docker container.
        • Helper scripts:

        ##### BOTH COMMANDS USE ENV VARS! #####
        
        # Builds the Golang chaincode
        /network/bin/cc-build.sh # go build $CC_PATH
        
        # Runs the Golang chaincode
        /network/bin/cc-run.sh # go run $GOPATH/src/$CC_PATH/*.go
        • Example commands:

        cd /network/bin/
        
        # Initialize dev environment in Dev mode.
        # -d: Dev mode
        ./dev-init.sh -d
        
        # Set env vars for "acme" Org
        source ./set-env.sh acme
        
        # Check/verify env vars for Org
        ./set-chain-env.sh
        
        # Package and install chaincode on Peer
        # -p: Package chaincode
        ./chain.sh install -p
        
        # In new terminal, ssh into vagrant
        
        # Set Org context in new terminal
        source set-env.sh acme
        
        # Start chaincode in Terminal mode
        cc-run.sh
        
        # In other terminal, we can run following commands
        chain.sh instantiate # To instantiate the chaincode
        chain.sh invoke # To invoke chaincode
        chain.sh query # To query chaincode
    • Client Side API:
      • Chaincode gets deployed on the Peer.
      • Applications use the Fabric Client SDK for interacting with the Chaincode.
      • There are 2 APIs that are used by the Applications.
        • Invoke API: Used for executing the business logic in the Chaincode by way of Transactions.
        • Query API: Used for reading the State of the Assets from Distributed Ledger platform.
      • Both the Invoke API and the Query API execute the functions exposed by the Chaincode.
    • Arguments Sent To Chaincode:
      • The Client Side API executes the functions by passing data in JSON format to the Chaincode.
      • The JSON object has a key called Args, which is set to an array of strings. The first element in the Args array is the name of the function which the Chaincode will execute in response to this client-side API invocation. The rest of the elements in the Args array are the parameters passed to that function.

      {
          "Args": ["FunctionName", "Param1", "Param2", "Param..n"]
      }
    • Transaction Flow:
      • When a client executes an Invoke API, a Transaction Proposal is created and sent to the Endorsing Peers. If everything is good with the proposed transaction, the Endorsing Peers sign the Transaction Proposal and send it back to the client. The client then sends the signed Transaction Proposal to the Orderer Service for inclusion of the transaction in a Block. The Orderer Service at some point creates the Block and sends it to the Peers in the Network. Not all Peers have the Chaincode installed on them; those Peers will still receive the Block sent by the Orderer Service.
      • When a client executes a Query API, any Peer with the Chaincode installed on it will execute the Chaincode function and return the response to the client. In this invocation, the client does not go through the Orderer Service.
    • Chaincode Interface:
      • All Golang Chaincode must implement following 3 functions:
        • Init(): Chaincode initialization logic. Called when invoke is run with the --isInit flag.
        • Invoke(): Contains business logic. Executed on Query and Invoke [Without init flag].
        • main(): Registers chaincode with the fabric runtime.
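      • The dispatch pattern behind Invoke() can be sketched in Go. This is a self-contained illustration with a mocked stub type (mockStub is invented here; the real interface is shim.ChaincodeStubInterface, whose GetFunctionAndParameters() splits Args the same way):

```go
package main

import "fmt"

// mockStub is a stand-in for the real shim.ChaincodeStubInterface,
// invented here for illustration: it carries the Args array.
type mockStub struct {
	args []string
}

// GetFunctionAndParameters mirrors the real stub method: the first
// element of Args is the function name, the rest are its parameters.
func (s *mockStub) GetFunctionAndParameters() (string, []string) {
	if len(s.args) == 0 {
		return "", nil
	}
	return s.args[0], s.args[1:]
}

// Invoke dispatches on the function name, as a Golang chaincode's
// Invoke() typically does.
func Invoke(stub *mockStub) (string, error) {
	fn, params := stub.GetFunctionAndParameters()
	switch fn {
	case "ping":
		return "pong", nil
	case "echo":
		return fmt.Sprint(params), nil
	default:
		return "", fmt.Errorf("unknown function: %q", fn)
	}
}

func main() {
	out, _ := Invoke(&mockStub{args: []string{"ping"}})
	fmt.Println(out)
}
```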
    • The Fabric Model: What makes fabric ideal as an enterprise blockchain solution?
      • Assets:
        • Asset definitions enable the exchange of almost anything with monetary value over the network, e.g. whole foods, antique cars, currency futures, bonds, stocks, digital goods etc.
        • Assets within the network are represented as a collection of key-value pairs, with state changes recorded as transactions on the ledger (distributed ledger).
        • Assets can be represented in Binary and JSON format.
        • There are 2 types of Assets, viz. Tangible Assets and Intangible Assets:
          • Tangible Assets: Tangible assets are typically physical assets or properties owned by a company, e.g. computer equipment. Tangible assets are the main type of assets that companies use to produce their products and services.
          • Intangible Assets: Intangible assets don't exist physically, yet they have a monetary value since they represent potential revenue, e.g. a stock, a bond, the copyright of a song. The record company that owns the copyright would get paid a royalty each time the song is played.
      • Chaincode/Smart Contracts:
        • Chaincode contains the smart contracts.
        • Many times chaincodes and smart contracts are used interchangeably because in most cases they mean exactly the same.
        • Chaincode defines the asset and also enforces rules for interacting with the asset or any other information that is stored on the distributed ledger.
        • Chaincode functions execute against the ledger’s current state database and are initiated through transaction proposals.
        • Chaincode execution results in a set of key-value pairs, also called a Write Set.
        • The Write Set can be submitted to the network and thereby appended to the ledger.
        • Chaincode execution is partitioned from transaction ordering, limiting the required level of trust and verification across node types, and optimizing network scalability and performance.
        • Chaincode or Smart Contracts define all the business logic.
        • Chaincode/Smart Contracts are stored on Peer nodes.
      • Ledger:
        • Ledger contains all of the state mutations or transactions. These state changes are produced by the invocation of chaincode.
        • The immutable, shared ledger encodes the entire transaction history for each channel, and includes SQL-like query capability for efficient auditing and dispute resolution.
        • Ledgers store all of the data.
        • Ledgers are stored on Peer nodes.
      • Privacy:
        • Channels and Private Data Collections enable private and confidential multi-lateral transactions that are usually required by competing businesses and regulated industries that exchange assets on a common network.
      • Security And Membership Services:
        • In Fabric Network, all participants have known identities.
        • Public key infrastructure is used to generate cryptographic certificates. These certificates can be tied to an organization, a network component, a user or a client application, and they can be used to manage data access control.
        • Role-based governance with the help of certificates is what makes Fabric permissioned.
        • Permissioned membership provides a trusted blockchain network, where participants know that all transactions can be detected and traced by authorized regulators and auditors.
      • Consensus:
        • At a very high level, we can say Consensus Model has something to do with multiple participants agreeing on something.
        • A unique approach to Consensus enables the flexibility and scalability needed for the enterprise.
    • Identities:
      • Every actor in a network has a digital identity, represented by an X.509 certificate.
      • Identities determine resources and access to information for actors.
      • This determination is made based on attributes contained in the certificate.
      • An identity together with its attributes is called a Principal. We can think of a Principal as some sort of a userid.
      • Identities are created by a trusted Certificate Authority (CA). In Fabric, we can use the Fabric CA for this.
      • Certificates are handed out via PKI (Public Key Infrastructure), which provides a secure basis for communication.
      • Just having a certificate is not enough for an actor. We also need the network to acknowledge the certificate, e.g. we need the organization to say: "Yes, this certificate belongs to my organization". The Identity must be registered in the organization's MSP (Membership Service Provider).
      • MSP (Membership Service Provider) turns verifiable identities into members of the blockchain network.
      • PKI (Public Key Infrastructure) is a collection of internet technologies that provides secure communication in the network.
      • PKI (Public Key Infrastructure):
        • Digital Certificates
        • Public and Private Keys
        • Certificate Authorities
        • Certificate Revocation Lists
      • MSP – For a member to have an access to the network, we need 4 things:
        • Have an identity issued by a CA that is trusted by the network.
        • Become member of an organization that is recognized and approved by the network members.
        • Add the MSP to either a consortium on the network or a channel.
        • Ensure the MSP is included in the policy definitions on the network.
    • Policies:
      • A policy is a set of rules that can define how a decision is made.
      • Policies describe a who and a what.
      • In Hyperledger, Policies are used for infrastructure management.
      • Uses of Policies in Hyperledger network:
        • Adding/Removing members from channel.
        • Change the structure of blocks.
        • Specify count of organizations for endorsement of transactions
      • How do we write Policy in Fabric:
        • Signature Policies:
          • Specify which specific identities must sign for the policy to be satisfied.
          • <OR | AND | NOutOf>
        • ImplicitMeta Policies:
          • Only used for channel configuration.
          • <ANY | ALL | MAJORITY>
    • Peer:
      • Chaincodes/Smart Contracts and Ledgers are stored on Peer nodes that are owned by Organizations inside the network.
      • Peer nodes can host multiple instances of Chaincodes and Ledgers.
      • End-users communicate with the network by using applications that connect to the Peer nodes of their Organization.
      • Peers use Channels to interact with other network components.
      • All Peers belong to Organizations.
      • Peers have an Identity assigned to them via a digital certificate (x.509).
      • A single Peer by itself cannot update information stored in its ledger. Updating requires the consent of other Peers in the network. The update transaction is done in 3 steps:
        • Step #1: Proposal.
          • Independently executed by endorsing Peers, which return Endorsement (Proposal) responses.
        • Step #2: Ordering and Packaging transactions into blocks.
          • Orderer receives Endorsed transactions and then creates the blocks.
        • Step #3: Validation and Commit of the transaction.
          • When Peer receives a new block from the Orderer, the Peer processes the block resulting in a new block being added to the Ledger.
    • Ledger:
      • On the ledger, we are recording facts about current state of the object.
      • Change history in ledger is immutable.
      • In Fabric, ledger consists of 2 parts:
        • World State:
          • It is a database that holds the current value of the object.
          • It can change frequently.
        • Blockchain:
          • It records all the transactions for the object that together results in a Current World State.
          • Transactions are collected inside blocks that are appended to the blockchain.
          • Blockchain data structure is very different from World State. It’s immutable.
          • Blockchain does not use a database.
          • Blockchain is implemented as a file. The reason for this is that only a few operations are performed on a Blockchain.
          • The primary operation on a Blockchain is appending data to it, and a file is perfect for that.
          • The first block is called the Genesis Block. It does not contain transaction data; it contains the configuration of the initial state of the network channel.
          • Blocks are connected together with the Header of a block.
          • Each block in blockchain has following structure:
            • Block Header: Contains following parts:
              • Block Number: Integer starting at 0 (Genesis Block), and increased by 1 for each new block.
              • Current Block Hash: Hash of all the transactions contained in the current block.
              • Previous Block Header Hash: Hash from the previous Block Header.
            • Block Data: Contains list of transactions arranged in order.
            • Block Meta-Data: Contains the certificate and the signature of the block creator, which is used to verify the block.
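      • The header linkage described above can be sketched as follows. This is a simplified, self-contained illustration: real Fabric hashes protobuf-encoded headers, and the field names here are invented:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// header is a simplified block header: number, hash of the block's
// transactions, and the hash of the previous block's header.
type header struct {
	Number         int
	DataHash       string
	PrevHeaderHash string
}

// hashHeader hashes the header fields (a stand-in for hashing the
// real, serialized header structure).
func hashHeader(h header) string {
	sum := sha256.Sum256([]byte(fmt.Sprintf("%d|%s|%s", h.Number, h.DataHash, h.PrevHeaderHash)))
	return hex.EncodeToString(sum[:])
}

// appendBlock creates the next header, linking it to the previous one.
func appendBlock(prev header, txData string) header {
	dataSum := sha256.Sum256([]byte(txData))
	return header{
		Number:         prev.Number + 1,
		DataHash:       hex.EncodeToString(dataSum[:]),
		PrevHeaderHash: hashHeader(prev),
	}
}

func main() {
	genesis := header{Number: 0, DataHash: "channel-config"}
	b1 := appendBlock(genesis, "tx1,tx2")
	b2 := appendBlock(b1, "tx3")
	// Each header embeds the hash of the previous header, so altering
	// genesis would change b1's PrevHeaderHash and, transitively, b2's.
	fmt.Println(b2.PrevHeaderHash == hashHeader(b1))
}
```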
    • Orderer:
      • Orders transactions into blocks.
      • Maintains list of Organizations that can create channels. This list of Organizations is called the Consortium. Also, this list is stored in a System Channel (Channel for the Orderers).
      • Enforces access control for channels (Application Channels). This way, Orderers can restrict users from Reading and Writing on a Channel.
      • Manage structure of the blocks.
      • We can tweak the structure of the blocks by setting the BatchSize and BatchTimeout parameters.
        • BatchSize: Maximum transaction count in one block.
        • BatchTimeout: Maximum time for a new block creation. This time is measured from the first transaction received in this new block.
      • Ordering service implementations:
        • Kafka (Deprecated since Fabric v2)
        • Solo (Deprecated since Fabric v2)
        • Raft (Recommended): It is a Crash Fault Tolerant (CFT) ordering service. It implements Leader-Follower model.
      • It is better to have multiple Orderer nodes that are owned by different Organizations. This way we make sure that even the ownership is decentralized.
      • Raft is a protocol for implementing distributed Consensus.
      • In Raft there are 2 timeout settings which control the elections:
        • Heartbeat Timeout
        • Election Timeout
      • Raft Election Process:
        • The Election Timeout is the amount of time a Follower waits before becoming a Candidate.
        • After the Election Timeout, the Follower becomes a Candidate, starts a new election term, votes for itself and sends out Request Vote messages to the other nodes.
        • If a receiving node hasn't voted yet in this term, it votes for the Candidate and resets its own Election Timeout.
        • Once a Candidate has a majority of votes, it becomes the Leader.
        • The Leader begins sending out Append Entries messages to its Followers. These messages are sent in intervals specified by the Heartbeat Timeout, and Followers respond to each Append Entries message.
        • The election term continues until a Follower stops receiving heartbeats and becomes a Candidate. If 2 nodes become Candidates at the same time, a split vote can occur.
        • Once a Leader is elected, all changes to the system are replicated to all nodes, using the same Append Entries messages that are used for heartbeats.
        • Raft can stay consistent even in the face of network partitions.
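      • The majority rule at the heart of the election can be sketched as follows. This is a simplified, self-contained illustration: timeouts, terms and log-freshness checks are omitted, and all names are invented:

```go
package main

import "fmt"

// requestVotes simulates one election term: the candidate votes for
// itself and asks each peer; a peer that has not yet voted in this
// term grants its vote.
func requestVotes(candidate string, peers []string, alreadyVoted map[string]bool) int {
	votes := 1 // the candidate votes for itself
	for _, p := range peers {
		if !alreadyVoted[p] {
			alreadyVoted[p] = true
			votes++
		}
	}
	return votes
}

// isLeader applies Raft's majority rule over the full cluster size.
func isLeader(votes, clusterSize int) bool {
	return votes > clusterSize/2
}

func main() {
	peers := []string{"orderer2", "orderer3", "orderer4", "orderer5"}
	votes := requestVotes("orderer1", peers, map[string]bool{})
	fmt.Println(votes, isLeader(votes, 5))
}
```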
    • Channels:
      • Channels are created by creating the first Transaction and submitting it to the Ordering Service. This Channel-creation Transaction specifies the initial configuration of the Channel and is used by the Ordering Service to write the Channel's Genesis Block.
      • We can use Configtxgen tool to create the first Transaction which will end up creating Channel and writing Genesis Block on that Channel.
      • The Configtxgen tool works by reading the network/configtx/configtx.yaml file which holds all of the configurations for the Channel. This file uses Channel Profiles.
      • Configtxgen:
        • Reads from the network/configtx/configtx.yaml file.
        • Can create a Configuration Transaction for the Application Channel.
        • Can create a Genesis Block for the System Channel.
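      • The Orderer's BatchSize/BatchTimeout parameters and the profiles read by Configtxgen live in configtx.yaml. A representative fragment is shown below; the values are the fabric-samples defaults and should be adjusted per network:

```yaml
Orderer: &OrdererDefaults
  OrdererType: etcdraft
  # Maximum time to wait before cutting a new block
  BatchTimeout: 2s
  BatchSize:
    # Maximum transaction count in one block
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
```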

    Blockchain Technology Benefits

    • Security:
      • Since Blockchain is a distributed, consensus-based architecture, it eliminates single points of failure and reduces the need for data intermediaries such as transfer agents or messaging system operators.
      • It helps prevent fraud and malicious third parties from doing bad things.
      • It's not foolproof, and we do hear about hacks, certainly in cryptocurrency and on cryptocurrency exchanges, but it is very, very difficult to hack or manipulate.
    • Transparency:
      • It provides transparency by putting in place mutualized standards, protocols and shared processes.
    • Trust:
      • Its transparent and immutable ledger makes it easy for different parties in a business network to collaborate, manage data and reach agreements.
    • Programmability:
      • It's programmable, so it can execute things like smart contracts, be more tamper-proof, and run deterministic software that automates the business logic.
      • We can write code to address many different concerns, from governance to regulatory compliance, data privacy and identity, looking at things like "know your customer" checks or anti-money-laundering attributes.
      • It can manage stakeholder participation, for things like proxy voting.
    • Privacy:
      • It provides privacy.
    • High-Performance:
      • It can be a private network or a hybrid network, engineered to sustain hundreds of millions of transactions per second and to handle periodic surges in network activity.
    • Scalability:
      • Its highly scalable.
    • Authenticity and Scarcity:
      • Digitization really ensures data integrity and enables asset provenance: a full transaction history in a single shared source of truth.
    • Streamlined Processes:
      • Since it can automate almost everything, it enables more real-time settlement, auditing and reporting, and it reduces processing times, the potential for error, and the delays caused by the number of steps and intermediaries required to achieve the same level of confidence.
    • Economic Benefits:
      • Reduced infrastructure, operational and transaction cost.
    • Market Reactivity:
      • It can be very reactive to the market.

    Smart Contracts

    • Contracts:
      • A contract is formed when an offer by one party is accepted by the other party.
      • Consideration is the price paid for the promise of the other party. The price may not necessarily involve money.
        • For e.g.: If you walk my dog, I will feed your cat.
        • For e.g.: If you walk my dog, I will pay you Rs. 15.
    • Smart Contracts:
      Contract terms are agreed to ----> Smart Contract placed on the Blockchain ----> Triggering event causes contract to be automatically executed
      • Contract terms are agreed to: Hard coded and cannot be changed without both parties being aware.
      • Smart Contract placed on the Blockchain: Public viewed and verified.
      • Triggering event causes contract to be automatically executed: If/then statement coding.
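    • The if/then execution above can be sketched as a toy escrow contract. This is a self-contained illustration; the event names and struct are invented:

```go
package main

import "fmt"

// escrow hard-codes the contract terms: payment is released only
// once delivery is confirmed.
type escrow struct {
	delivered bool
	paidOut   bool
}

// onEvent is the triggering-event handler: IF the delivery condition
// holds, THEN the payout executes automatically; otherwise nothing
// changes.
func (e *escrow) onEvent(event string) {
	if event == "delivery-confirmed" {
		e.delivered = true
	}
	if e.delivered && !e.paidOut {
		e.paidOut = true
	}
}

func main() {
	e := &escrow{}
	e.onEvent("package-shipped")
	fmt.Println(e.paidOut) // condition not yet met, no payout
	e.onEvent("delivery-confirmed")
	fmt.Println(e.paidOut) // contract executed automatically
}
```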

    Development Environment Setup

    # Initialize vagrant and create Vagrantfile
    vagrant init
    
    # +-+-+-+-+-+-+-+-+ Configure Vagrantfile +-+-+-+-+-+-+-+-+
    # Checkout boxes at: https://vagrantcloud.com/search
    config.vm.box = "generic/ubuntu2010"
    
    # Setup network
    config.vm.network "private_network", ip: "192.168.33.10"
    
    # Mount folder
    config.vm.synced_folder "./mount", "/home/vagrant/mount"
    
    # Configure memory
    config.vm.provider "virtualbox" do |vb|
        vb.gui = true
        vb.memory = "10000" #10 GB
    end
    # +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    
    # Bootup our virtual linux box
    vagrant up
    
    # SSH into our running virtual linux box
    vagrant ssh
    
    # +-+-+-+-+-+-+- Install HLF Pre-Requisites +-+-+-+-+-+-+-+
    # https://hyperledger-fabric.readthedocs.io/en/release-1.4/prereqs.html
    # install git
    sudo apt install git -y
    
    # install curl
    sudo apt install curl -y
    
    # install docker
    sudo apt install build-essential -y
    sudo apt install apt-transport-https ca-certificates gnupg-agent software-properties-common -y
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo apt-key fingerprint 0EBFCD88
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose -y
    sudo usermod -aG docker $USER
    newgrp docker
    
    # install go
    curl -o "go.tar.gz" https://storage.googleapis.com/golang/go1.17.1.linux-amd64.tar.gz
    sudo tar -C /usr/local -xzf "go.tar.gz"
    
    echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
    source ~/.bashrc
    
    # install node
    curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
    sudo apt install nodejs -y
    
    # install ohmyzsh
    # NOTE: Default password for root account on ubuntu: vagrant
    sudo apt install zsh -y
    sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
    • Install Samples, Binaries and Docker Images:

    # SSH into vagrant
    vagrant ssh
    
    # Go to /mount directory
    cd mount
    
    # Install samples, binaries and docker images
    curl -sSL https://bit.ly/2ysbOFE | bash -s -- 2.2.4 1.5.2
    • Install SSH plugin for VSCode.
    • While vagrant machine is up and running, execute following to get location of IdentityFile:
    vagrant ssh-config
    • Add new SSH host into VSCode SSH plugin:

    # Host can be found in Vagrantfile. Look for following line:
    # config.vm.network "private_network", ip: "192.168.33.10"
    ssh vagrant@192.168.33.10 -i "C:/Aditya/Projects/HLF/.vagrant/machines/default/virtualbox/private_key"
    • Add \bin to path:

    # In vagrant ssh, execute:
    export PATH=/home/vagrant/mount/fabric-samples/bin:$PATH

    CouchDB

    • We can use CouchDB as our State Database.
    • Ledger contains a Blockchain and a State Database. Blockchain is implemented as a file and a State Database is implemented as a database.
    • By default, we use a Go implementation of LevelDB, and this database is embedded in the Peer node.
    • LevelDB stores Chaincode data as key-value pairs. It supports queries based on key, key range and composite key.
    • As an alternative to LevelDB, we can use CouchDB as the State Database of the Ledger.
    • With CouchDB:
      • We can store data as JSON objects.
      • We can write more complicated queries for retrieving specific data.
      • We can use Indexes for more efficient querying over larger data sets.
    • We need to decide which database we will be using as a State Database before setting up the network. Otherwise, we have to bring down the network, enable CouchDB and bring the network up again.
    • Each Peer has its own instance of the CouchDB.
    • No data replication at the CouchDB level.
    • Remote access is disabled to CouchDB.
    • CouchDB stores dates in format: YYYY-MM-DDThh:mm:ss.s.

    Private Data Collections

    • There are 3 levels of data privacy:
      • Channels
      • Private Data Collections
      • Encryption
    • Private Data Collections can provide privacy for subsets of Organizations within a Channel.
    • Private data collection consists of:
      • The private data itself
      • Hash of the private data
    • The Private data is never shared with the Ordering Service.
    • The Private data is stored in a separate private database on authorized Peers; only its hash goes onto the public ledger.
    • Peers that don't have access to a Private Data Collection hold none of its data on their Peer node.
    • The Gossip Protocol is used for communication between Peer nodes. This is why we need to connect at least one Peer node of each Organization to the Channel as an Anchor Peer: because of this, Peers know of each other's existence. That's how Peer-to-Peer communication is implemented using the Gossip Protocol.
    • What can we do with PDC (Private Data Collection):
      • Use a corresponding public key for tracking public state.
      • Chaincode access control.
      • Sharing private data out of band.
      • Sharing private data with other collections.
      • Transferring private data to other collections.
      • Using private data for transaction approval.
      • Keeping transactors private.
    • Members on the Channel can restrict visibility of data.
    • Ledger is COMMON so all Transactions are still visible to all.
    • Transaction in Ledger has HASH of data that is stored in Private Data Collections.
    • Private Data Collections are configured using JSON at the time of Chaincode Instantiate or Upgrade.
    • Peers manage PDC data in a separate set of datastores.
    • PDC are isolated namespaces within Chaincode.
    • PDC Key-Values are accessible within Chaincode using Chaincode Stub API.
    • Policies control read access to the PDC.

    Range Queries

    • Range Queries require Start Key and the End Key.
    • Start Key is included in the result set.
    • End Key is excluded from the result set.
    • If Start Key and End Key are not specified (i.e. empty strings are provided), then the result set will have all the Keys from the state data set.
    • Keys are indexed in Lexical Order.
    • Maximum results returned in result set can be restricted by Peer Config.
    • Commonly used functions from Chaincode Stub API for executing range queries are GetStateByRange() and GetPrivateDataByRange().
    • Composite Key:
      • A key formed by combining 2 or more attributes of the record.
      • Uniqueness is guaranteed only if all parts of the key are used.
      • None of the attribute values in a Composite Key can contain the null character (\x00).
      • When PutState() is executed, the index is created in the state database.
      • GetState() and GetStateByRange() functions do not support Partial Composite Keys.
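    The semantics above (lexical key ordering, start-inclusive/end-exclusive ranges, null-separated composite keys) can be illustrated in plain Go. This sketch only mimics the Chaincode Stub API; it does not use Fabric, and the helper names are assumptions for illustration:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildCompositeKey mimics how Fabric joins attributes into one key:
// parts are separated by the null character \x00, which is why no
// attribute value may itself contain \x00.
func buildCompositeKey(objectType string, attrs []string) string {
	return "\x00" + objectType + "\x00" + strings.Join(attrs, "\x00") + "\x00"
}

// rangeQuery mimics GetStateByRange semantics over lexically ordered
// keys: startKey is included, endKey is excluded, and empty strings
// mean unbounded on that side.
func rangeQuery(keys []string, startKey, endKey string) []string {
	sort.Strings(keys) // keys are indexed in lexical order
	var result []string
	for _, k := range keys {
		if startKey != "" && k < startKey {
			continue
		}
		if endKey != "" && k >= endKey {
			break
		}
		result = append(result, k)
	}
	return result
}

func main() {
	keys := []string{"CAR0", "CAR2", "CAR1", "CAR10"}
	// Note: lexically "CAR10" < "CAR2", so it falls inside this range.
	fmt.Println(rangeQuery(keys, "CAR0", "CAR2"))
	fmt.Printf("%q\n", buildCompositeKey("Owner~Vin", []string{"alice", "Vin1000"}))
}
```

    The `CAR10`/`CAR2` case shows why lexical ordering matters: numeric suffixes do not sort numerically, which is a common surprise when designing key schemes.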

    Rich Queries

    • Requires data to be modeled as JSON.
    • Peers need to use CouchDB.
    • Indexes need to be created for performance.
    • Restrictions on CouchDB JSON Documents:
      • Key cannot begin with underscore _.
      • Fields beginning with underscore _ are used internally.
      • ~version is a reserved field.
    • Rich queries for CouchDB are written in the Mango query language.
    • Mango query language:
      • Declarative JSON query language.
      • Inspired by MongoDB query language.
      • Adopted by Cloudant and CouchDB.
    • Rich queries are not re-executed at Validation time. This may lead to an inconsistent chaincode state.
    • Do not use Rich Queries in Update Transactions (Invoke) unless you can guarantee no Phantom Reads.
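    As an illustration, a Mango selector over JSON state data might look like the following; the document fields (`docType`, `owner`) and index names are assumptions for illustration:

```json
{
  "selector": {
    "docType": "vehicle",
    "owner": "alice"
  },
  "use_index": ["_design/indexOwnerDoc", "indexOwner"]
}
```

    Declaring `use_index` pins the query to a pre-created CouchDB index, which matters for the performance point above: without an index, CouchDB falls back to a full scan.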

    Asset History Logs

    • Fabric manages Assets in the Chaincode.
    • Invoke Transaction changes the State on per Key basis.
    • We can use Queries to get the Current State of the Asset.
    • Queries don’t provide the history of Transactions on a specific Asset, i.e. they cannot retrieve a past State (for example, listing all previous owners of the vehicle with id Vin#1000).
    • To get the history of Transactions on Asset, we use History Log and History API.
    • Asset History Log are managed on per Peer basis.
    • Peers on which the Chaincode needs access to the History Log must be configured to create and update these logs.
    • Once enabled for managing History Logs, a Peer creates the log on a per-Chaincode or per-Asset basis.
    • Asset History on Peer is managed in a separate datastore.
    • Regardless of what database is used as State Database, the Asset History is always managed in GoLevelDB database.
    • History Logs are set up at the Peer level. They are not replicated via Gossip; each Peer is responsible for managing its own Asset Logs.
    • History API is used from within the Chaincode to access history.
    • To enable the history database, look for the following setting in core.yaml:

    history:
        enableHistoryDatabase: true
    • Just like the Rich Query API, the History API is not re-executed in the Validation phase of the Transaction. This may lead to an inconsistent chaincode State.
    • Do not use the History API in Update Transactions (Invoke) unless you can guarantee no Phantom Reads.

    Programmatic Access Control

    • Fabric uses PKI (Public Key Infrastructure) for Identity Management.
    • All users are issued an X.509 certificate (Enrollment Certificate), e.g. Admin and other users in multiple roles.
    • All nodes in the network are also issued an X.509 certificate, e.g. Orderer, Peers, Clients etc.
    • Authorization decisions are made by the nodes based on the user’s Role and the Policies that have been set up for the network. E.g. only identities assigned the Admin role can install and instantiate chaincode.
    • At the network level, the Access and Authorization control is achieved by way of configuration. i.e. we have to declare the Policies that will drive the Access and Authorization at node level.
    • For building Access Control in Chaincode, check the Client Identity Chaincode Library: https://github.com/hyperledger/fabric-chaincode-go/tree/main/pkg/cid
    • Custom Attributes are added to the X509 certificates by the Registrar.
    • Client Identity Chaincode Library provides access to the Attributes or Identity set in X509 certificates.
    • Cryptogen tool does not support addition of Attributes in the certificates.
    • For additional Attribute support in X.509 certificates, the Identities need to be set up with a fabric-ca.

    Fabric Node SDK

    • Module: fabric-network:
      • Gateway Class: Connection point for accessing the network.
      • Network Class: Represents set of Peers belonging to a network or the Application Channel.
      • Contract Class and Transaction Class: Exposes APIs (Invoke and Query) for Chaincode interactions. Transaction Class exposes additional APIs for providing finer control over Chaincode interactions.
      • Example flow: Pattern: Invoking and Querying Chaincode:
        • Application creates instance of Gateway Class using new operator.
        • Application then initializes the Gateway Instance with Wallet and the Connection Profile.
        • Wallet holds the credentials information for the user. Connection Profile is provided in the form of YAML or JSON file.
        • After initialization, the Application creates an instance of Network Class by invoking a function on Gateway Class.
        • Then, Application creates the instance of Contract Class by invoking a function on the Network Class instance.
        • Functions exposed by the Contract Class are then used for executing the Invoke and Query functions on the Chaincode.
      • fabric-network module exposes classes for managing Wallets.
    • Wallet:
      • A user may participate in multiple networks with different Roles.
      • In other words, a user may have multiple Identity Profiles to interact with different networks.
      • A Wallet is a construct used for managing these Identity Profiles.
      • A Wallet contains one or more user Identity Contexts; each Identity Context (Identity Profile) holds Certificates, a Private Key and a Public Key.
      • Identities in a Wallet are referred to by a Label, a free-format string that is unique for each Identity managed in the Wallet.
      • Wallet interface:
        • InMemoryWallet: Manages identities in memory.
        • FileSystemWallet: Manages identities on user’s filesystem.
        • CouchDBWallet: Manages identities in a CouchDB Server.
    • Module: Client Class API:
      • Provides low level functions (via Channel Class) for interacting with the Peer and the Orderer.
      • Acts as a factory for Peers, Orderer, Channel and Chaincode classes.
      • Provides functions for managing the client instance configuration.
      • Manage Channel Update Transactions.
      • The instance is Stateful, i.e. the same instance cannot be re-used against multiple Channels.
      • Channel related updates require multiple Admin signatures.
      • Channel Config update Tx submitted to Orderer.
      • Creates instances of Peer Class that are used for Peer queries.
      • Provides functions for managing Channel Configuration.
    • Module: Channel Class API:
      • Provides API for carrying out Channel Aware Queries and Channel Aware Tasks.
      • For example:
        • Create new Channel
        • Update existing Channel
        • Access to Ledger (Blk, Txn)
        • Joining Peer to the Channel
        • Instantiation, Invoke and Querying of the Chaincode
        • Getting finer control over flow of the Transaction
      • Channel class instance is initialized with Peers or Orderer on Channel.
      • Exposes functions for accessing the Channel Configuration.

    Common Errors

    • Error:
    zsh: ./networkdown.sh: bad interpreter: /bin/bash^M: no such file or directory
    • Solution:
    sed -i -e 's/\r$//' networkdown.sh

    • Error:

    Docker not installed
    # OR
    Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
    • Solution:

    # ssh into vagrant box
    vagrant ssh
    
    # snap install docker
    sudo snap install docker
    
    # To fix permission issue
    # Change docker.sock file to be owned by "vagrant" user
    sudo chown vagrant /var/run/docker.sock

    • Minifabric: How to install, approve, commit and initialize chaincode in a single command?
    ./minifab install,approve,commit,initialize -n simple -v 2.0 -p '"init","Aditya","35","Nishigandha","30"'

    • Minifabric: How to update endorsement policy?

    ./minifab anchorupdate
    ./minifab discover # This will create a folder "discover" at "/vars/discover"
    ./minifab channelquery # This will create a channel config file at "/vars/channel1_config.json". Make the changes and save this file.
    
    # Do the channel update to apply new changes
    ./minifab channelsign,channelupdate

    Visit original content creator repository
    https://github.com/aditya43/blockchain

  • Inicial

    TP-INICIAL

    Practical Assignment No. 0: Initial

    Sequential Problems

    1. Suppose an individual wants to invest their capital in a bank and wants to know how much money they will earn after one month if the bank pays interest at a rate of 2% per month.
    2. A store offers a 15% discount on the purchase total, and a customer wants to know how much they will finally have to pay for their purchase.
    3. A teacher wants to know what percentage of men and what percentage of women there are in a group of students.
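    As a minimal sketch of problem 1, the interest calculation in Go (the starting capital is an example value, not part of the exercise):

```go
package main

import "fmt"

// monthlyEarnings returns the interest earned in one month
// at the given monthly rate (e.g. 0.02 for 2%).
func monthlyEarnings(capital, rate float64) float64 {
	return capital * rate
}

func main() {
	capital := 1000.0 // example starting capital
	earnings := monthlyEarnings(capital, 0.02)
	fmt.Printf("Earnings after one month: %.2f\n", earnings)
	fmt.Printf("Total after one month: %.2f\n", capital+earnings)
}
```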

    Simple Conditional (Selection) Problems

    1. Determine whether a student passes or fails a course, knowing that they pass if the average of their three grades is greater than or equal to 7, and fail otherwise.
    2. A store gives a 20% discount to customers whose purchase exceeds $5000. What amount will a person pay for their purchase?
    3. A worker needs to calculate their weekly salary, which is obtained as follows: if they work 40 hours or less they are paid $300 per hour; if they work more than 40 hours they are paid $300 for each of the first 40 hours and $400 for each extra hour.
    4. Develop an algorithm that reads two numbers and prints them in ascending order.
    5. Write an algorithm that calculates the total to pay for a purchase of shirts. If three or more shirts are bought, a 20% discount is applied to the purchase total; if fewer than three, a 10% discount is applied.
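    The weekly salary rule in problem 3 maps directly to a single branch; a sketch in Go:

```go
package main

import "fmt"

// weeklySalary computes the pay: $300 per hour for the first
// 40 hours, $400 per hour for every hour beyond 40.
func weeklySalary(hours float64) float64 {
	if hours <= 40 {
		return hours * 300
	}
	return 40*300 + (hours-40)*400
}

func main() {
	fmt.Println(weeklySalary(38)) // 38 regular hours
	fmt.Println(weeklySalary(45)) // 40 regular + 5 extra hours
}
```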

    Compound Conditional Problems (nested or cascaded ifs)

    1. Read 2 numbers; if they are equal, multiply them; if the first is greater than the second, subtract them; otherwise, add them.
    2. Read three different numbers and print the largest of the three.

    Problems with Loops

    1. Calculate the average of a student who has 7 grades in the Programming 1 course.
    2. Read 10 numbers and compute the cube and fourth power of each.
    3. Read 10 numbers and print only the positive ones.
    4. Read 15 negative numbers, convert them to positive and print those numbers.
    5. Suppose you have the set of grades of a group of 40 students. Write an algorithm to calculate the average grade and the lowest grade of the whole group.
    6. Calculate and print the multiplication table of any number. Print the multiplicand, the multiplier and the product.

    Visit original content creator repository
    https://github.com/jorgekaz/Inicial

  • mobile-carrier-bot

    mobile-carrier-bot

    Build Status

    A bot to access mobile carrier services, currently supports

    • Three IE
    • TIM
    • Iliad

    🚧🚧🚧🚧🚧🚧🚧🚧🚧🚧 ⚠️ Heavy Work in Progress ⚠️ 🚧🚧🚧🚧🚧🚧🚧🚧🚧🚧

    TODO (not in order):

    • skeleton, plugins, setup
    • architecture docs and diagrams
    • healthcheck status/info/env
    • expose prometheus metrics via endpoint
    • expose JVM metrics via JMX
    • scalatest and scalacheck
    • codecov or alternatives
    • telegram client (polling)
    • slack client (webhook)
    • scrape at least 2 mobile carrier services to check balance
    • (polling) notify for low credits and expiry date
    • in-memory db with Ref
    • doobie db with PostgreSQL and H2
    • if/how store credentials in a safe way
    • authenticated endpoints as alternative to telegram/slack
    • write pure FP lib alternative to scala-scraper and jsoup (I will never do this!)
    • fix scalastyle and scalafmt
    • slate static site for api
    • gitpitch for 5@4 presentation
    • constrain all types with refined where possible
    • travis
    • travis automate publish to dockerhub
    • publish to dockerhub
    • create deployment k8s chart
    • create argocd app
    • statefulset with PostgreSQL
    • alerting with prometheus to slack
    • grafana dashboard
    • backup/restore logs and metrics even if re-create cluster
    • generate and publish scaladoc
    • fix manual Circe codecs with withSnakeCaseMemberNames config
    • add gatling stress tests
    • add integration tests
    • manage secrets in k8s

    Endpoints

    # health checks
    http :8080/status
    http :8080/info
    http :8080/env
    

    Development

    # test
    sbt test -jvm-debug 5005
    sbt "test:testOnly *HealthCheckEndpointsSpec"
    sbt "test:testOnly *HealthCheckEndpointsSpec -- -z statusEndpoint"
    
    # run with default
    TELEGRAM_API_TOKEN=123:xyz sbt app/run

    sbt aliases

    • checkFormat checks format
    • format formats sources
    • update checks outdated dependencies
    • build checks format and runs tests

    Other sbt plugins

    • dependencyTree shows project dependencies

    Deployment

    # build image
    sbt clean docker:publishLocal
    
    # run temporary container
    docker run \
      --rm \
      --name mobile-carrier-bot \
      niqdev/mobile-carrier-bot-app:0.1
    
    # access container
    docker exec -it mobile-carrier-bot bash
    
    # publish
    docker login
    docker tag niqdev/mobile-carrier-bot-app:0.1 niqdev/mobile-carrier-bot-app:latest
    docker push niqdev/mobile-carrier-bot-app:latest

    Charts

    # print chart
    helm template -f charts/app/values.yaml charts/app/
    
    # apply chart
    helm template -f charts/app/values.yaml charts/app/ | kubectl apply -f -
    
    # verify healthcheck
    kubectl port-forward deployment/<DEPLOYMENT_NAME> 8888:8080
    http :8888/status
    
    # logs
    kubectl logs <POD_NAME> -f
    Visit original content creator repository https://github.com/niqdev/mobile-carrier-bot
  • animated-lamp

    Telegram Bot For Screenshot Generation.

    Description

    An attempt to implement the screenshot generation of telegram files without downloading the entire file. Live version can be found here @screenshotit_bot.

    Installation Guide

    Prerequisites

    • FFmpeg.
    • Python3 (3.6 or higher).

    Local setup

    The setup given here is for a linux environment (Debian/Ubuntu).

    • Clone to local machine.

    $ git clone https://github.com/odysseusmax/animated-lamp.git
    $ cd animated-lamp
    • Create and activate virtual environment.

    $ python3 -m venv venv
    $ source venv/bin/activate
    
    • Install dependencies.
    $ pip3 install -U -r requirements.txt
    

    Environment Variables

    Properly set up the environment variables or populate config.py with the values. Setting up environment variables is advised, as some of the values are sensitive data and should be kept secret.

    • API_ID(required) – Get your telegram API_ID from https://my.telegram.org/.
    • API_HASH(required) – Get your telegram API_HASH from https://my.telegram.org/.
    • BOT_TOKEN(required) – Obtain your bot token from Bot Father.
    • LOG_CHANNEL(required) – Log channel’s id.
    • DATABASE_URL(required) – Mongodb database URI.
    • AUTH_USERS(required) – Admin(s) of the bot. Users’ telegram ids separated by spaces. At least one id must be specified.
    • HOST(required) – Public URL of file streaming service (See Setting up Streaming Service).
    • SESSION_NAME(optional) – Name you want to call your bot’s session, Eg: bot’s username.
    • MAX_PROCESSES_PER_USER(optional) – Number of parallel processes each user can have, defaults to 2.
    • MAX_TRIM_DURATION(optional) – Maximum allowed video trim duration in seconds. Defaults to 600s.
    • TRACK_CHANNEL(optional) – User activity tracking channel’s id. Only needed if you want to track and block any user. Disabled by default.
    • SLOW_SPEED_DELAY(optional) – Delay required between each interaction from users in seconds. Defaults to 5s.
    • TIMEOUT (optional) – Maximum time alloted to each process in seconds, after which process will be cancelled. Defaults to 1800s(30 mins).
    • DEBUG (optional) – Set some value to use DEBUG logging level. INFO by default.
    • IAM_HEADER (optional) – Authentication token for streaming service. Defaults to ''.
    • WORKER_COUNT (optional) – Number of process to be handled at a time. Defaults to 20.

    Run bot

    $ python3 -m bot

    Now go and /start the bot. If everything went right, bot will respond with welcome message.

    Setting up Streaming Service

    The streaming service can be a custom version of TgFileStream, modded to work with this setup. The mod basically adds a form of header-based authentication and changes the endpoints. The authentication part is optional and the endpoint used here is /file/:chat_id/:message_id. Make sure to note these changes when deploying your own instance. The streaming service used for @screenshotit_bot is not related to TgFileStream and I do not plan to make it OSS.

    Supported commands and functions

    Commands

    General commands

    • /start – Command to start bot or check whether bot is alive.
    • /settings – Command to configure bot’s behavior.
    • /set_watermark – Command to add custom watermark text to screenshots. Usage: /set_watermark watermark_text.

    Admin commands

    Any user specified in AUTH_USERS can use these commands.

    • /status – Returns number of total users.
    • /ban_user – Command to ban any user. Usage: /ban_user user_id ban_duration ban_reason. user_id – telegram id of the user, ban_duration – ban duration in days, ban_reason – reason for ban. All 3 parameters are required.
    • /unban_user – Command to unban any banned user. Usage: /unban_user user_id. user_id – telegram id of the user. The parameter is required.
    • /banned_users – Command to view all banned users. Usage: /banned_users. This takes no parameters.
    • /broadcast – Command to broadcast some message to all users. Usage: reply /broadcast to the message you want to broadcast.

    Functions

    • Screenshot Generation – Generates screenshots from telegram video files or streaming links. Number of screenshots range from 2-10.
    • Sample Video Generation – Generates sample video from telegram video files or streaming links. Video duration range from 30s to 150s. Configurable in /settings.
    • Video Trimming – Trims any telegram video files or streaming links.

    Settings

    In bot settings.

    • Upload Mode – Screenshot upload mode. Either as image file or as document file. Defaults to as image file.
    • Watermark – Watermark text to be embedded to screenshots. Texts upto 30 characters supported. Disabled by default.
    • Watermark Color – Font color to be used for watermark. Any of white, black, red, blue, green, yellow, orange, purple, brown, gold, silver, pink. Defaults to white.
    • Watermark Font Size – Font size to be used for watermarks. Any of small(30), medium(40), large(50). Defaults to medium.
    • Watermark Position – Watermark text’s position. Defaults to bottom left.
    • Sample Video Duration – Sample video’s duration. Any of 30s, 60s, 90s, 120s, 150s. Defaults to 30s.
    • Screenshot Generation Mode – Either random or equally spaced. Defaults to equally spaced.

    Contributions

    Contributions are welcome.

    Contact

    You can contact me @odysseusmax.

    Thanks

    Thanks to Dan for his Pyrogram library.

    Thanks to Tulir Asokan for his TgFileStream Bot.

    Dependencies

    • pyrogram
    • tgcrypto
    • motor
    • dnspython
    • async-timeout
    • aiohttp

    License

    Code released under The GNU General Public License.

    Visit original content creator repository
    https://github.com/odysseusmax/animated-lamp

  • baskets

    Baskets

    coverage_badge

    A website to manage orders for local food baskets.

    Project built using Django, Bootstrap and JavaScript.

    Baskets screenshot

    Table of contents

    1. Background and goal
    2. Features
    3. Dependencies
    4. Run using Docker
    5. Populate dummy database
    6. Configure SMTP
    7. Tests run
    8. API Reference
    9. UI Language

    Background and goal

    This project has been developed to meet a real need for a local association.

    The aforementioned association centralizes orders for several local food producers. Thus, food baskets are delivered regularly to users.

    Before the deployment of this application, administrators got orders from users via SMS or email.

    Baskets app aims to save them time by gathering user orders in one unique tool.

    Payments are managed outside this application.

    Features

    User interface

    • Sign In page:
      • User account creation entering personal information and setting a password.
      • Passwords are validated to prevent weak passwords.
      • A verification email is sent to user with a link to a page allowing them to confirm their email address.
    • Sign Up page:
      • Users with verified email can log in using their email and password.
    • Next Orders page:
      • Shows the list of deliveries for which we can still order, in chronological order.
      • Clicking on each delivery opens a frame below showing delivery details: delivery date, last day to order and available products arranged by producer.
      • User can create one order per delivery.
      • Orders can be updated or deleted until their deadline.
    • Order history page:
      • Shows a list of user’s closed orders in reverse chronological order.
      • Clicking on each order will open its details below.
    • Password reset:
      • In “Login” page, a link allows users to request password reset entering their email address.
      • If an account exists for that email address, an email is sent with a link to a page allowing to set a new password.
    • Profile page:
      • Clicking on username loads a page where users can view and update its profile information.
    • Contact us page:
      • A link on footer loads a page with a contact form. The message will be sent to all staff members.

    All functionalities except “contact” require authentication.

    Admin interface

    Users with both “staff” and “superuser” status can access admin interface.

    • Users page:
      • Manage each user account: activate/deactivate, set user groups and set staff status.
    • Groups page:
      • Manage groups.
      • Email all group users via a link.
    • Producers page:
      • Manage producers and their products (name and unit price).
      • Deactivate whole producer or single product:
        • Deactivated products won’t be available for deliveries.
        • If a product with related opened order items is deactivated, those items will be removed and a message will be shown to email affected users.
      • Export .xlsx file containing recap of monthly quantities ordered for each product (one sheet per producer).
      • If a product has related opened order items and its unit price changes, related opened orders will be updated and a message will be shown to email affected users.
    • Deliveries page:
      • Create/update deliveries, setting its date, order deadline, available products and optional message.
        • If “order deadline” is left blank, it will be set to ORDER_DEADLINE_DAYS_BEFORE before delivery date.
      • View total ordered quantity for each product to notify producers. A link allows seeing all related Order Items.
      • If a product is removed from an opened delivery, related opened orders will be updated and a message will be shown to email affected users.
      • In “Deliveries list” page:
        • View “number of orders” for each delivery, which links to related orders.
        • Export order forms:
          • Once a delivery deadline is passed, a link will be shown to download delivery order forms in xlsx format.
          • The file will contain one sheet per order including user information and order details.
        • Action to email users having ordered for selected deliveries.
    • Orders page:
      • View user orders and, if necessary, create and update them.
      • In “Orders list” page:
        • Export .xlsx file containing recap of monthly order amounts per user.
        • If one or several orders are deleted, a message will be shown to email affected users.

    Other

    • Mobile-responsiveness: This has been achieved using Bootstrap framework for user interface. Moreover, Django admin interface is also mobile responsive.
    • API: User orders can be managed using an API. See API reference for further details.
    • UI Translation: Translation strings have been used for all UI text to facilitate translation. See UI Language for further details.

    Dependencies

    In addition to Django, the following libraries have been used:

    Required versions can be seen in requirements (pip) or Pipfile (pipenv).

    Run using Docker

    $ git clone https://github.com/daniel-ob/baskets.git
    $ cd baskets
    

    Then run:

    $ docker compose up -d
    

    And finally, create a superuser (for admin interface):

    $ docker compose exec web python manage.py createsuperuser
    

    Please note that, for simplicity, console email backend is used by default for email sending, so emails will be written to stdout.

    Populate dummy database

    docker exec baskets-web sh -c "python manage.py shell < populate_dummy_db.py"
    

    Configure SMTP

    • Change backend on config/settings.py:
    EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
    
    • Set SMTP server config on .envs/.local/.web:
    # SMTP server config (if used)
    EMAIL_HOST=
    EMAIL_HOST_PASSWORD=
    EMAIL_HOST_USER=
    EMAIL_PORT=
    EMAIL_USE_TLS=
    

    Tests run

    Be sure you have ChromeDriver installed to run Selenium tests.

    First launch db container:

    $ docker compose up -d db
    

    Then open virtual environment and install all dependencies:

    $ pipenv shell
    (baskets)$ pipenv install --dev
    

    Finally, run all tests:

    (baskets)$ python manage.py test
    

    To run only functional tests:

    (baskets)$ python manage.py test baskets.tests.test_functional
    

    API Reference

    A Postman collection to test the API can be found here.

    Browsable API

    If settings.DEBUG is set to True, browsable API provided by REST framework can be visited on http://127.0.0.1:8000/api/v1/

    API Authentication

    All API endpoints requires token authentication.

    JWT token pair can be requested on /api/token/ providing username and password (request Body form-data). This returns access and refresh tokens.

    To authenticate requests, access token must be added to headers:

    Authorization: Bearer {{access_token}}
    

    When expired, access token can be refreshed on /api/token/refresh/ providing refresh token.

    List open deliveries

    List deliveries for which we can still order.

    GET /api/v1/deliveries/
    

    Response

     Status: 200 OK
    
    [
        {
            "url": "http://127.0.0.1:8000/api/v1/deliveries/3/",
            "date": "2023-06-27",
            "order_deadline": "2023-06-23"
        },
        {
            "url": "http://127.0.0.1:8000/api/v1/deliveries/2/",
            "date": "2023-07-04",
            "order_deadline": "2023-06-30"
        }
    ]

    Get delivery detail

    GET /api/v1/deliveries/{delivery_id}/
    

    Response

     Status: 200 OK
    
    {
        "id": 2,
        "date": "2023-05-30",
        "order_deadline": "2023-05-25",
        "products_by_producer": [
            {
                "name": "producer1",
                "products": [
                    {
                        "id": 1,
                        "name": "Eggs (6 units)",
                        "unit_price": "2.00"
                    },
                ]
            },
            {
                "name": "producer2",
                "products": [
                    {
                        "id": 2,
                        "name": "Big vegetables basket",
                        "unit_price": "1.15"
                    }
                ]
            }
        ],
        "message": "This week meat producer is on vacation",
    }
    

    List user orders

    GET /api/v1/orders/
    

    Response

     Status: 200 OK
    
    [
        {
            "url": "http://127.0.0.1:8000/api/v1/orders/30/",
            "delivery": {
                "url": "http://127.0.0.1:8000/api/v1/deliveries/2/",
                "date": "2023-07-04",
                "order_deadline": "2023-06-30"
            },
            "amount": "220.00",
            "is_open": true
        }
    ]
    

    Get order detail

    GET /api/v1/orders/{order_id}/
    

    Response

     Status: 200 OK
    
    {
        "url": "http://127.0.0.1:8000/api/v1/orders/30/",
        "delivery": 2,
        "items": [
            {
                "product": 5,
                "product_name": "Package of meat (5kg)",
                "product_unit_price": "110.00",
                "quantity": 2,
                "amount": "220.00"
            }
        ],
        "amount": "220.00",
        "message": "",
        "is_open": true
    }
    

    Create an order

    POST /api/v1/orders/
    
    {   
        "delivery": 3,
        "items": [
            {
                "product": 14,
                "quantity": 2
            }
        ],
        "message": "is it possible to come and pick it up the next day?"
    
    }
    

    Request must follow this rules:

    • delivery order_deadline must not be passed
    • a user can only post an order per delivery
    • all item products must be available in delivery.products

    Response

    Status: 201 Created
    
    (Created order detail)
    

    Update an order

    Orders can be updated until delivery.order_deadline.

    PUT /api/v1/orders/{order_id}/
    
    {   
        "delivery": 3,
        "items": [
            {
                "product": 14,
                "quantity": 1
            }
        ]
    }
    

    Response

     Status: 200 OK
    
    (Updated order detail)
    

    Delete an order

    DELETE /api/v1/orders/{order_id}/
    

    Response

     Status: 204 No Content
    

    UI Language

    Translation strings have been used for all text of the user and admin interfaces, so all of it can be extracted into message files (.po) to facilitate translation.

    In addition to default language (English), French translation is available and can be set on settings.py:

    LANGUAGE_CODE = "fr"
    

    The server must be restarted to apply changes.

    Adding new translations

    From base directory, run:

    django-admin makemessages -l LANG
    django-admin makemessages -d djangojs -l LANG
    

    Where LANG can be, for example: es, es_AR, de …

    This will generate django.po and djangojs.po translation files inside the locale/LANG/LC_MESSAGES folder.

    Once all msgstr in .po files are translated, run:

    django-admin compilemessages
    

    This will generate corresponding .mo files.

    Visit original content creator repository https://github.com/daniel-ob/baskets
  • CHILL-TUBE

    Chill Tube

    Chill Tube is a Next.js application that allows users to watch their favourite animes ad-free. The application allows users to create a watchlist, continue watching where they left off on any device, write comments and give reviews to episodes and shows. System administrators can add and modify shows and episodes.

    This project has been created for a client in Ljubljana, Slovenia as a final project for their high school.

    Setup

Dependencies

This app requires Node.js to run.
Before you start the app, you need to install all npm dependencies.
You can do that by opening your project in a terminal and running the npm install --legacy-peer-deps command.

    Environment variables

After installing all dependencies, you will need to set up your environment variables.
You can do that by copying .env.example to .env(.local) and changing the values of all necessary fields.

    Database

For the app to function properly, you will need to create a database. The SQL script can be found in the database folder of the project.

    Startup

    Dev

To start your application in dev mode, use the npm run dev command in your terminal with the app folder as the cwd.
After that, your app will be started on port 3000.

    Production

To start your app in production mode, you first need to create a production build. You can do that by running the npm run build command in your terminal. After the build completes, you can start the app by simply entering the command npm start in your command-line interface.
If you wish to host your app, first create a production build. Then transfer your files and folders to the hosting (except the node_modules and .git folders and all env files). After transferring your files, create your Node app on the hosting; check with your hosting provider for how to do that. After creating the app, add all your env variables, run npm install, and start your app.

    Visit original content creator repository
    https://github.com/MihajloMilojevic/CHILL-TUBE

  • TextureClassification

    Texture Classification

    Implementation of different texture feature extractors and texture classifiers for both Grayscale and RGB images.

    The implemented algorithms are tested on Outex-TC databases. Algorithms for grayscale images are tested on Outex_TC_00010-r database, while algorithms for RGB images are tested on Outex_TC_00010-c database.

    Algorithms are implemented in either MATLAB or Python.

    Grayscale Texture Image Classification

    Methods used for Feature Extraction of grayscale texture images are based on:

    1. Gray level co-occurrence matrix (GLCM)
    2. Discrete wavelet packet transform (DWPT)
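
The repository's feature extractors are implemented in MATLAB; as a rough illustration of the GLCM approach, the following NumPy sketch builds a co-occurrence matrix for a single offset and derives two standard features (contrast and energy). It is a toy version, not the project's implementation.

```python
# Minimal GLCM sketch: count co-occurring gray-level pairs at offset (dy, dx),
# normalize to a joint probability distribution, then compute two Haralick-style
# features. Assumes img already holds integer gray levels in [0, levels).
import numpy as np

def glcm_features(img, levels=4, dx=1, dy=0):
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize so entries sum to 1
    i, j = np.indices((levels, levels))
    contrast = float(((i - j) ** 2 * glcm).sum())  # local intensity variation
    energy = float((glcm ** 2).sum())              # texture uniformity
    return glcm, contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
glcm, contrast, energy = glcm_features(img)
```

In practice, features from several offsets and angles are concatenated into one descriptor per image before classification.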

Inside the folder, there is an example of plotting wavelet energy (DWPTExample), which is used for extracting features for texture classification, using the function PlotDWPT.

The classification, as well as classifier evaluation, is done in Main_program.
There is also an implementation of an SVM classifier that classifies texture images using wavelet features.

Inside the folder, there are three .mat files containing the extracted GLCM features, the wavelet features, and the obtained results.

    RGB Texture Image Classification

    Features of RGB texture images are extracted using:

    1. Discrete wavelet packet transform (DWPT)
    2. Pretrained AlexNet CNN without the last layer

Wavelet-based classification of RGB images uses the same feature extraction (Wavelet_image_features) as in the grayscale case. In contrast to the feature vector extracted from a grayscale image, the features extracted from an RGB image have three parts, one per color channel (R, G, B). The extracted features are given in Wavelet_Features_RGB.

    The classification is done in the Main_program_RGB.

A pretrained AlexNet is used to extract a 4096-dimensional feature vector. The implementation is given in AlexNet_Feature_Extraction.

The extracted feature vectors are given in two separate files.

The dimension of the extracted feature vector is reduced using the PCA algorithm, after which an
SVM classifier is trained on the new features.
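
The PCA step can be sketched in a few lines of NumPy via the SVD (the repository itself uses MATLAB; the dimensions below are toy stand-ins, not the real 4096-dimensional AlexNet features):

```python
# PCA by SVD on centered data: the top-k right singular vectors are the
# principal axes, and projecting onto them yields the reduced features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))      # 50 samples, 8-d features (stand-in for 4096-d)

Xc = X - X.mean(axis=0)           # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                             # number of principal components to keep
X_reduced = Xc @ Vt[:k].T         # shape (50, k)

# Fraction of total variance captured by the top-k components.
explained = float((S[:k] ** 2).sum() / (S ** 2).sum())
# An SVM classifier would then be trained on X_reduced.
```

Choosing k by the explained-variance ratio is a common heuristic before fitting the SVM.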

    Visit original content creator repository
    https://github.com/analazovic/TextureClassification

  • bls-data-extract

    Average Price Data (AP) Database


    Introduction

    The Average Price Data (AP) from the Bureau of Labor Statistics (BLS) provides detailed information on average consumer prices for household fuels, motor fuels, and food items. Collected monthly across various urban areas in the United States, this data is crucial for measuring the price levels of specific items over time and across different regions.

    This repository contains scripts and a database schema to set up and manage a local SQLite database for storing and querying the AP data. It includes tools for downloading the latest data from the BLS website and fetching Consumer Price Index (CPI) data via the BLS API.


    Database Structure

    The database comprises several tables that store data about items, areas, periods, series, and the actual price observations. Understanding the schema and relationships between these tables is crucial for constructing accurate SQL queries and extracting meaningful insights.

    Tables and Their Relationships

    1. ap_item

      • Purpose: Stores information about the items for which average prices are recorded.
      • Fields:
        • item_code (TEXT, PRIMARY KEY): Unique identifier for each item.
        • item_name (TEXT): Descriptive name of the item.
      • Example Entries:
        • 701111: Flour, white, all purpose, per lb. (453.6 gm)
        • 702111: Sugar, white, all sizes, per lb. (453.6 gm)
    2. ap_area

      • Purpose: Contains information about the geographic areas covered in the survey.
      • Fields:
        • area_code (TEXT, PRIMARY KEY): Unique identifier for each area.
        • area_name (TEXT): Descriptive name of the area.
      • Example Entries:
        • 0000: U.S. city average
        • A100: Northeast Urban
        • S200: South Urban
    3. ap_period

      • Purpose: Defines the periods (months) for which data is collected.
      • Fields:
        • period (TEXT, PRIMARY KEY): Code representing the period (e.g., M01 for January).
        • period_abbr (TEXT): Abbreviation of the period name (e.g., JAN).
        • period_name (TEXT): Full name of the period (e.g., January).
      • Example Entries:
        • M01: JAN, January
        • M02: FEB, February
    4. ap_series

      • Purpose: Provides metadata about each time series, linking items and areas.
      • Fields:
        • series_id (TEXT, PRIMARY KEY): Unique identifier for each time series.
        • area_code (TEXT): References ap_area.area_code.
        • item_code (TEXT): References ap_item.item_code.
        • series_title (TEXT): Title describing the series.
        • footnote_codes (TEXT): Any associated footnotes.
        • begin_year (INTEGER): First year of data availability.
        • begin_period (TEXT): First period of data availability.
        • end_year (INTEGER): Last year of data availability.
        • end_period (TEXT): Last period of data availability.
      • Relationships:
        • ap_series.area_code → ap_area.area_code
        • ap_series.item_code → ap_item.item_code
    5. ap_data_current

      • Purpose: Holds current year-to-date average price data.
      • Fields:
        • series_id (TEXT): References ap_series.series_id.
        • year (INTEGER): Year of the observation.
        • period (TEXT): References ap_period.period.
        • value (REAL): Observed average price.
        • footnote_codes (TEXT): Any associated footnotes.
      • Primary Key: (series_id, year, period)
      • Relationships:
        • ap_data_current.series_id → ap_series.series_id
        • ap_data_current.period → ap_period.period
    6. ap_data_food

      • Purpose: Contains average price data for food items.
      • Fields and Relationships: Same as ap_data_current.
    7. ap_data_gasoline

      • Purpose: Contains average price data for gasoline.
      • Fields and Relationships: Same as ap_data_current.
    8. ap_data_householdfuels

      • Purpose: Contains average price data for household fuels.
      • Fields and Relationships: Same as ap_data_current.
    9. ap_seasonal

      • Purpose: Stores information about seasonal adjustment codes.
      • Fields:
        • seasonal_code (TEXT, PRIMARY KEY): Code indicating seasonal adjustment.
        • seasonal_text (TEXT): Description of the seasonal code.

    Schema Definition

    Below is the SQL schema used to create the tables:

    CREATE TABLE ap_item (
        item_code TEXT PRIMARY KEY,
        item_name TEXT
    );
    
    CREATE TABLE ap_area (
        area_code TEXT PRIMARY KEY,
        area_name TEXT
    );
    
    CREATE TABLE ap_period (
        period TEXT PRIMARY KEY,
        period_abbr TEXT,
        period_name TEXT
    );
    
    CREATE TABLE ap_seasonal (
        seasonal_code TEXT PRIMARY KEY,
        seasonal_text TEXT
    );
    
    CREATE TABLE ap_series (
        series_id TEXT PRIMARY KEY,
        area_code TEXT,
        item_code TEXT,
        series_title TEXT,
        footnote_codes TEXT,
        begin_year INTEGER,
        begin_period TEXT,
        end_year INTEGER,
        end_period TEXT
    );
    
    CREATE TABLE ap_data_current (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE ap_data_food (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE ap_data_gasoline (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE ap_data_householdfuels (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE cpi_info (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );

    Data Flow for Query Construction

    To construct a query that retrieves specific average price data, follow these steps:

    1. Identify the Item:

      • Use ap_item to find the item_code corresponding to the desired item_name.
    2. Identify the Area:

      • Use ap_area to find the area_code corresponding to the desired area_name.
    3. Find the Series ID:

      • Use ap_series to find the series_id matching both the item_code and area_code.
    4. Retrieve Data Observations:

      • Use the series_id to query the appropriate ap_data_* table (ap_data_food, ap_data_gasoline, etc.) for the desired year and period.
    5. Join Period Information:

      • Use ap_period to translate period codes into readable period_name values.
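
The five steps above collapse into a single join, which can be run with Python's built-in sqlite3 module. The snippet below uses a throwaway in-memory database with a trimmed version of the schema and invented sample rows, purely for illustration:

```python
# Walk the item -> area -> series -> data -> period chain in one query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ap_item (item_code TEXT PRIMARY KEY, item_name TEXT);
CREATE TABLE ap_area (area_code TEXT PRIMARY KEY, area_name TEXT);
CREATE TABLE ap_period (period TEXT PRIMARY KEY, period_abbr TEXT, period_name TEXT);
CREATE TABLE ap_series (series_id TEXT PRIMARY KEY, area_code TEXT, item_code TEXT);
CREATE TABLE ap_data_food (series_id TEXT, year INTEGER, period TEXT, value REAL,
                           PRIMARY KEY(series_id, year, period));
-- invented sample rows
INSERT INTO ap_item VALUES ('702111', 'Sugar, white, all sizes, per lb. (453.6 gm)');
INSERT INTO ap_area VALUES ('0000', 'U.S. city average');
INSERT INTO ap_period VALUES ('M01', 'JAN', 'January');
INSERT INTO ap_series VALUES ('APU0000702111', '0000', '702111');
INSERT INTO ap_data_food VALUES ('APU0000702111', 2024, 'M01', 0.97);
""")

rows = conn.execute("""
    SELECT d.year, p.period_name, i.item_name, a.area_name, d.value
    FROM ap_data_food d
    JOIN ap_series s ON d.series_id = s.series_id
    JOIN ap_item i   ON s.item_code = i.item_code
    JOIN ap_area a   ON s.area_code = a.area_code
    JOIN ap_period p ON d.period = p.period
    WHERE i.item_name LIKE 'Sugar%' AND a.area_code = '0000'
""").fetchall()
```

Against the real average_price_data.db, the same query shape works unchanged; only the WHERE clause and the ap_data_* table need to be adjusted.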

    Setup Instructions

    Prerequisites

    • Python 3.6+
    • SQLite3
    • pip (Python package installer)
    • Virtual Environment (recommended)

    Installing Dependencies

    # Clone the repository
    git clone https://github.com/yourusername/ap-database.git
    cd ap-database
    
    # Create a virtual environment (optional but recommended)
    python -m venv venv
    source venv/bin/activate  # On Windows, use venv\Scripts\activate
    
    # Install required Python packages
    pip install -r requirements.txt

    Setting Up the Database

    Run the seed_data.py script to initialize the database:

    python seed_data.py

    This script will:

    • Create the SQLite database named average_price_data.db.
    • Create all the tables as per the schema.
    • Load data from local CSV files into the database.

    Downloading Data

    Use the get_http.py script to download the necessary data files from the BLS website:

    python get_http.py

    This script will:

    • Download specified files from the BLS FTP site.
    • Save them in the downloads directory.

    Note: Ensure that the downloads directory exists or will be created by the script.

    Fetching CPI Data via API

    Use the get_api.py script to fetch Consumer Price Index (CPI) data via the BLS API:

    1. Obtain a BLS API Key:

      • Register at the BLS website to obtain an API key.

      • Store the API key in a .env file in the project root:

        BLS_API_KEY=your_api_key_here
        
    2. Run the Script:

      python get_api.py

      This script will:

      • Fetch CPI data for specified series_id, start_year, and end_year.
      • Save the data into text files and insert it into the cpi_info table in the database.
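
For reference, the BLS v2 API accepts a JSON POST containing the series IDs, a year range, and the registration key. The sketch below builds such a request with the standard library; it illustrates the request shape only, and may differ from what get_api.py actually does internally (the series ID shown is a common CPI-U series, used here as an example):

```python
# Build (but do not send) a BLS v2 timeseries request.
import json
import os
import urllib.request

def build_bls_request(series_ids, start_year, end_year, api_key):
    payload = {
        "seriesid": series_ids,
        "startyear": str(start_year),
        "endyear": str(end_year),
        "registrationkey": api_key,
    }
    return urllib.request.Request(
        "https://api.bls.gov/publicAPI/v2/timeseries/data/",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_bls_request(["CUUR0000SA0"], 2020, 2024,
                        os.environ.get("BLS_API_KEY", ""))
# urllib.request.urlopen(req) would return the JSON response, whose
# Results.series[].data rows map onto the cpi_info table columns.
```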

    Usage Examples

    Sample Query Structure

    To retrieve specific average price data, you can use the following SQL query structure:

    SELECT
      d.year,
      p.period_name,
      i.item_name,
      a.area_name,
      d.value
    FROM
      ap_data_food AS d
    JOIN
      ap_series AS s ON d.series_id = s.series_id
    JOIN
      ap_item AS i ON s.item_code = i.item_code
    JOIN
      ap_area AS a ON s.area_code = a.area_code
    JOIN
      ap_period AS p ON d.period = p.period
    WHERE
      i.item_name = 'Sugar, white, all sizes, per lb. (453.6 gm)'
      AND a.area_name = 'U.S. city average'
    ORDER BY
      d.year, p.period_name;

    This query will:

    • Retrieve the average price of sugar per pound in U.S. city averages.
    • Display the data ordered by year and month.

    Important Notes

    • Primary Keys:

      • Ensure uniqueness and efficient data retrieval.
    • Foreign Keys:

      • Maintain referential integrity between tables.
    • Data Partitioning:

      • Data is divided into specific tables based on item categories for optimized access.
    • Understanding Period Codes:

      • Monthly Periods:
        • M01 to M12 represent January to December.
      • Annual Averages:
        • M13 may be used to represent annual average data.

    Contributing

    Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.


    License

    This project is licensed under the MIT License.


    Visit original content creator repository
    https://github.com/ashakoen/bls-data-extract