
  • text-diff

    TextDiff

    JavaScript diff library with support for visual, HTML-formatted output

This repository contains the diff functionality of the google-diff-match-patch library by Neil Fraser, packaged as a Node module suitable for requiring into projects.

    Example usage

    var Diff = require('text-diff');
    
    var diff = new Diff(); // options may be passed to constructor; see below
    var textDiff = diff.main('text1', 'text2'); // produces diff array
    diff.prettyHtml(textDiff); // produces a formatted HTML string

    Initialization options

    Arguments may be passed into the Diff constructor in the form of an object:

    • timeout: Number of seconds to map a diff before giving up (0 for infinity).
    • editCost: Cost of an empty edit operation in terms of edit characters.

    Example initialization with arguments: var diff = new Diff({ timeout: 2, editCost: 6 });

    Documentation

    The API documentation below has been modified from the original API documentation.

    Initialization

    The first step is to create a new diff object (see example above). This object contains various properties which set the behaviour of the algorithms, as well as the following methods/functions:

    main(text1, text2) => diffs

    An array of differences is computed which describe the transformation of text1 into text2. Each difference is an array. The first element specifies if it is an insertion (1), a deletion (-1) or an equality (0). The second element specifies the affected text.

    main("Good dog", "Bad dog") => [(-1, "Goo"), (1, "Ba"), (0, "d dog")]

    Despite the large number of optimisations used in this function, diff can take a while to compute. The timeout setting is available to set how many seconds any diff’s exploration phase may take (see “Initialization options” section above). The default value is 1.0. A value of 0 disables the timeout and lets diff run until completion. Should diff time out, the return value will still be a valid difference, though probably non-optimal.

    cleanupSemantic(diffs) => null

    A diff of two unrelated texts can be filled with coincidental matches. For example, the diff of “mouse” and “sofas” is [(-1, "m"), (1, "s"), (0, "o"), (-1, "u"), (1, "fa"), (0, "s"), (-1, "e")]. While this is the optimum diff, it is difficult for humans to understand. Semantic cleanup rewrites the diff, expanding it into a more intelligible format. The above example would become: [(-1, "mouse"), (1, "sofas")]. If a diff is to be human-readable, it should be passed to cleanupSemantic.
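For example, a short sketch of the cleanup step using the methods documented here (the commented output mirrors the example above):

var Diff = require('text-diff');

var diff = new Diff();
var textDiff = diff.main('mouse', 'sofas'); // character-level diff with coincidental matches
diff.cleanupSemantic(textDiff);             // rewrites the diff array in place
// textDiff now resembles [(-1, "mouse"), (1, "sofas")]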

    cleanupEfficiency(diffs) => null

    This function is similar to cleanupSemantic, except that instead of optimising a diff to be human-readable, it optimises the diff to be efficient for machine processing. The results of both cleanup types are often the same.

The efficiency cleanup is based on the observation that a diff made up of a large number of small edits may take longer to process (in downstream applications), or take more capacity to store or transmit, than a smaller number of larger edits. The editCost option (see "Initialization options" above) sets the cost of handling a new edit in terms of handling extra characters in an existing edit. The default value is 4, which means that if expanding the length of a diff by three characters can eliminate one edit, then that optimisation will reduce the total cost.

    levenshtein(diffs) => int

    Given a diff, measure its Levenshtein distance in terms of the number of inserted, deleted or substituted characters. The minimum distance is 0 which means equality, the maximum distance is the length of the longer string.
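For instance, reusing the main example above (a sketch; between equalities the distance counts the larger of the deleted and inserted character counts, following the original diff-match-patch behaviour):

var textDiff = diff.main('Good dog', 'Bad dog'); // [(-1, "Goo"), (1, "Ba"), (0, "d dog")]
diff.levenshtein(textDiff); // => 3, i.e. max(3 deleted, 2 inserted)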

    prettyHtml(diffs) => html

    Takes a diff array and returns a string of pretty HTML. Deletions are wrapped in <del></del> tags, and insertions are wrapped in <ins></ins> tags. Use CSS to apply styling to these tags.

    Tests

Tests have not been ported to this fork of the library; however, tests are available in the original library. If you would like to port the tests, you will need to do some renaming of function calls (the diff_ prefix has been removed from functions in this fork) and remove the tests specific to the "patch" and "match" functionality of the original library.


  • fintech-sector-portfolio-analysis

    Fintech Sector Portfolio Analysis

This project is a portfolio analysis of different sectors within Fintech, undertaken to better understand the growth, correlations, and profitability of Fintech companies. By computing metrics such as cumulative returns, 21-day rolling averages and standard deviations, and Sharpe ratios, and by running Monte Carlo simulations, our analysis should provide insight into which sectors/stocks in Fintech would be good investments.

    Through our analysis we hope to answer the following questions:

    1. How does each Fintech sector, and the individual stocks within them, perform over time?
    2. Which sectors and individual stocks are the best potential investments?
    3. What are the relationships or correlations between each sector?
    4. Based on what we learned about the sectors and stocks, if we were to come up with a Fintech portfolio what stocks would we choose?

    Data Used

We currently use yfinance to grab 5 years of closing price data (counted back from the time the notebook is run) for the following sectors of Fintech:

    1. Paytech
      • PayPal
      • Square
      • MasterCard
    2. Lending
      • LendingTree
      • LendingClub
      • Black Knight
    3. Banking
      • Fiserv
      • Jack Henry and Associates
      • FIS (Fidelity National Information Services)

    Summary

    Our project begins by using yfinance to collect 5 years of closing price data from each stock within our chosen Fintech sectors. We then reformat the data to produce the daily returns needed to run the majority of our calculations.

[Figures: DataFrame holding the daily returns of each stock in the Paytech sector; line plot showing the daily returns of PYPL]

Then we calculate metrics such as the cumulative returns, rolling 21-day averages and standard deviations, betas, and Sharpe ratios. Each metric is visualized to better see how the stocks and sectors compare to each other.

[Figures: composite line plot of cumulative returns for all stocks; composite bar plot of Sharpe ratios for all stocks]
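As a rough sketch of these calculations (close_df is an assumed variable name for a DataFrame of closing prices indexed by date, as produced by the collection step; the 252-trading-day annualization for the Sharpe ratio is our assumption):

import pandas as pd

daily_returns = close_df.pct_change().dropna()          # daily percentage returns
cumulative_returns = (1 + daily_returns).cumprod() - 1  # cumulative return over time
rolling_mean = close_df.rolling(window=21).mean()       # 21-day rolling average
rolling_std = close_df.rolling(window=21).std()         # 21-day rolling standard deviation
sharpe_ratios = (daily_returns.mean() * 252) / (daily_returns.std() * 252 ** 0.5)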

Next we run 5-year and 1-year prediction Monte Carlo simulations; for each prediction length we run both evenly and unevenly weighted simulations for each sector. The uneven weight distributions are a 50%/30%/20% split, where each stock within the sector is weighted based on its Sharpe ratio. With the results from the Monte Carlo simulations we describe the 95% confidence intervals, assuming a starting portfolio value of $10,000.

[Figures: line plot of the cumulative returns predicted by the evenly weighted Monte Carlo simulation for Paytech; 95 percent confidence intervals based on the Monte Carlo prediction results]
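A minimal sketch of how the uneven split could be assigned for a sector (sharpe_ratios as computed above; the SQ and MA ticker symbols, and the sort-then-assign reading of the 50/30/20 rule, are our assumptions):

weights = [0.5, 0.3, 0.2]
ranked = sharpe_ratios[["PYPL", "SQ", "MA"]].sort_values(ascending=False)  # Paytech tickers
paytech_weights = dict(zip(ranked.index, weights))
# the highest-Sharpe stock gets 50%, the next 30%, the lowest 20%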

Finally, with the insights gained from these calculations, we create a custom portfolio made up of the highest-Sharpe-ratio stocks from all sectors. We then run similar calculations, averaging the daily returns of all stocks to get the portfolio's daily returns over 5 years. With the daily returns we calculate annualized metrics and run Monte Carlo simulations to predict how the portfolio might do in 1 or 5 years, again looking at the 95% confidence intervals for each prediction under the different weight distributions.

[Figure: annualized average return and cumulative returns for the custom portfolio]


    Technologies

This is a Python 3.7 project run in JupyterLab within a conda dev environment.

    The following dependencies are used:

    1. Jupyter – Running code
    2. Conda (4.13.0) – Dev environment
    3. Pandas (1.3.5) – Data analysis
    4. Matplotlib (3.5.1) – Data visualization
    5. Numpy (1.21.5) – Data calculations + Pandas support
    6. hvPlot (0.8.1) – Interactive Pandas plots
7. holoviews (1.15.2+) – Interactive Pandas plots; this minimum version is REQUIRED, as older versions will cause an error
    8. Alpaca Trade API (2.3.0) – Required for Monte Carlo simulations
    9. yfinance (0.1.87) – Data collection
    10. nbdime (3.1.1) – Fixing merge conflicts in Jupyter notebooks

    Installation Guide

    If you would like to run the program in JupyterLab, install the Anaconda distribution and run jupyter lab in a conda dev environment.

    To ensure that your notebook runs properly you can use the requirements.txt file to create an exact copy of the conda dev environment used to create the notebook.

    Create a copy of the conda dev environment with conda create --name myenv --file requirements.txt

    Then install the requirements with conda install --name myenv --file requirements.txt


    Usage

    The Jupyter notebook fintech-sector-portfolio-analysis.ipynb will provide all steps of the data collection, preparation, and analysis. Data visualizations are shown inline and accompanying analysis responses are provided.

Note that the data collection and the Monte Carlo simulations change every time the notebook is run. The data collection pulls the 5 years of data preceding the run date, and the Monte Carlo simulations use that data to produce randomized predictions of what the portfolio performance could look like.

    The data collected and shown in the examples were from the time that this project was started – November 2022.

    Our presentation slides for this project are in the Resources folder: Fintech-Sector_Analysis-Presentation


    Contributors

    Ethan Silvas
    Naomy Velasco
    Karim Bouzina
    Jeff Crabill


    License

    This project uses the GNU General Public License

  • laravel-policy-soft-cache

    Laravel Policy Soft Cache Package


    Optimize your Laravel application’s performance with soft caching for policy checks. This package caches policy invocations to prevent redundant checks within the same request lifecycle, enhancing your application’s response times.

    Requirements

    This package is compatible with Laravel 9, 10, 11, 12, and PHP >= 8.1.

    Installation

    You can install the package via composer:

    composer require innoge/laravel-policy-soft-cache

    You can publish the config file with:

    php artisan vendor:publish --provider="Innoge\LaravelPolicySoftCache\LaravelPolicySoftCacheServiceProvider"

    This is the contents of the published config file:

    return [
        /*
         * When enabled, the package will cache the results of all Policies in your Laravel application
         */
        'cache_all_policies' => env('CACHE_ALL_POLICIES', true),
    ];

    You can also use CACHE_ALL_POLICIES in your .env file to change it.

    CACHE_ALL_POLICIES=false
    

    Usage

By default, this package caches all policy calls across your entire application. You can disable this behavior by setting the cache_all_policies configuration option to false. You can then specify which Policy classes should be soft cached and which should not. If you want your policy to be cached, add the Innoge\LaravelPolicySoftCache\Contracts\SoftCacheable interface.

    For Example:

    use Innoge\LaravelPolicySoftCache\Contracts\SoftCacheable;
    
    class UserPolicy implements SoftCacheable
    {
        ...
    }
    

    Clearing the cache

Sometimes you will want to clear the policy cache after model changes. To do so, call the Innoge\LaravelPolicySoftCache::flushCache() method.
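For example, a minimal sketch of flushing the cache from a model observer (the observer wiring is our suggestion rather than something the package requires, and the fully-qualified class name below is our reading of the method referenced above):

use App\Models\User;
use Innoge\LaravelPolicySoftCache\LaravelPolicySoftCache;

class UserObserver
{
    public function updated(User $user): void
    {
        // Drop cached policy results so later checks in this
        // request re-evaluate the policies against fresh data
        LaravelPolicySoftCache::flushCache();
    }
}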

    Known Issues

    Gate::before and Service Provider Load Order

    When the innoge/laravel-policy-soft-cache package is installed in an application that utilizes Gate::before, typically defined in the AuthServiceProvider, a conflict may arise due to the order in which service providers are loaded.

    Resolution Steps

    To resolve this issue, follow these steps:

    1. Manual Service Provider Registration: Add \Innoge\LaravelPolicySoftCache\LaravelPolicySoftCacheServiceProvider::class to the end of the providers array in your config/app.php. This manual registration ensures that the LaravelPolicySoftCacheServiceProvider loads after all other service providers, including AuthServiceProvider.

      'providers' => [
          // Other Service Providers
      
          \Innoge\LaravelPolicySoftCache\LaravelPolicySoftCacheServiceProvider::class,
      ],
    2. Disable Auto-Discovery for the Package: To prevent Laravel’s auto-discovery mechanism from automatically loading the service provider, add innoge/laravel-policy-soft-cache to the dont-discover array in your composer.json. This step is crucial for maintaining the manual load order.

      "extra": {
          "laravel": {
              "dont-discover": ["innoge/laravel-policy-soft-cache"]
          }
      },
    3. Reinstall Dependencies: After updating your composer.json, run composer install to apply the changes. This step is necessary for the changes to take effect.

      composer install

    Testing

    composer test

    Changelog

    Please see CHANGELOG for more information on what has changed recently.

    Contributing

    Please see CONTRIBUTING for details.

    Security Vulnerabilities

    Please review our security policy on how to report security vulnerabilities.

    Credits

    License

    The MIT License (MIT). Please see License File for more information.

  • weatheralerts

    An integration to get weather alerts from weather.gov


    Breaking changes

    v0.1.2

• The YAML packages currently available for weatheralerts v0.1.2 are not compatible with prior versions of weatheralerts. Older YAML packages should still work with weatheralerts v0.1.2; however, the most recent YAML package files contain new features and fixes.

    Installation Quickstart

This quickstart install guide assumes you are already familiar with custom component installation and with Home Assistant YAML configuration. If you need more detailed step-by-step instructions, check the links at the bottom. Troubleshooting information, weatheralerts YAML package information, and Lovelace UI examples are also linked at the bottom.

    Install the weatheralerts integration via HACS. After installing via HACS, don’t restart Home Assistant yet. We will do that after completing the YAML platform configuration.

You will need to find your zone and county codes by looking up your state or marine zone at https://alerts.weather.gov/. For land areas, click the Land area with zones link and you will find a list of states with Public Zones and County Zones links. Once you find your state, open its Public Zones and County Zones links and find the respective codes for your county. All you need are the first two letters (your state abbreviation) and the last three digits (the zone/county ID number) of your zone code and county code for the platform configuration. The zone and county ID numbers are usually not the same number, so be sure to look up both codes. For marine zones, go to https://alerts.weather.gov/, click the Marine regions/areas with zones link and you will find a list of marine areas with Zones links. In the Zones link for the marine area you are interested in, find the exact marine zone. The first two letters of the marine zone code are used for the state configuration option, and the last three digits are used for the zone option (omit any leading zeros).

Once installed, and once you have your state (or marine zone) abbreviation and ID numbers, add the weatheralerts sensor platform to your configuration. If your state is Wisconsin and your county is Outagamie, then the state abbreviation is WI, the zone ID number is 038, and the county ID number is 087. Remove any leading zeros from the ID numbers; your YAML platform configuration would then look something like this:

sensor:
  - platform: weatheralerts
    state: WI
    zone: 38
    county: 87

    Once your configuration is saved, restart Home Assistant.

    That completes the integration (custom component) installation.

    Check the Links below for more detailed instructions, troubleshooting, and for YAML package and Lovelace UI usage and examples.

    Updating via HACS

Check the Breaking Changes section of this README to see if you need to manually update the YAML packages or make any changes to your custom YAML or Lovelace UI cards. If there are no breaking changes, simply use the Update button for the weatheralerts integration within HACS and then restart Home Assistant.

    Links

    Reconfiguration via UI

    You can reconfigure the integration through the Home Assistant UI:

    1. Go to Settings > Devices & Services.
    2. Find the Weather Alerts integration and click on it.
    3. Click Configure.
    4. Update the State, Zone, and County values.
    5. Click Save. The integration will automatically reload.

    Todo list

    • Add more documentation
    • Add config flow to allow UI-based configuration (eliminate yaml-based platform configuration)
    • Create alternative (possibly simpler) YAML package or move some template sensors into the integration
    • Add backup weather alert source for occasions when weather.gov json feed is experiencing an outage
    • Add Canadian weather alerts
  • clear-regex

    clear-regex

    Write regular expressions clearly with comments and named matches.

    Usage

    The most convenient way to use clear-regex is with tagged template literals. This way it’s easy to

• split regular expressions across lines
    • add comments
    • use other regexes or values inside the new regex

    const crx = require('clear-regex');
    
    const yearRx = /\d{4}/;
    const monthRx = /\d{2}/;
    const dayRx = /\d{2}/;
    
    const myNewRegex = crx`
            # this matches date strings like '2019-01-13'
            ${yearRx}-      # this is the year part
            ${monthRx}-     # month part
            ${dayRx}        # day part
        `;

    The comments, whitespace and newline characters get stripped away and the result of the above is the same as

    const myNewRegex = /\d{4}-\d{2}-\d{2}/;

    Comments

    The comments begin with a # character and go until the end of the line. Use them to explain what a certain part of your regular expression does.

    const phoneNumber = crx`
        # matches phone numbers
        #
        # there can be any number of digits
        # optionally grouped with spaces or dashes
        #
        ^\s*            # optional whitespace at the beginning
        (\+|0+)         # start with a plus or zeros
    (               # begin group of digits
            ([- ])?     # optional delimiter
            (\d+)       # some digits
        )+              # end group of digits
        \s*$            # optional whitespace at the end
    `;

    Placeholders

    If you use clear-regex as a tagged template literal, you can use placeholders to insert literal values or other regular expressions into your new regex. This makes dynamic regexes and reuse convenient.

    const year = 2019;
    const monthRx = /\d{2}/;
    const dayRx = /\d{2}/;
    
// match a date string in 2019
    const dateRx = crx`^${year}-${monthRx}-${dayRx}`;

    Named matching groups

You can give names to your matching groups, which makes it easier to retrieve them from a match result. The name tags look like ?<name>.

    const regex = crx`^
        (?<year>\\d{4})-
        (?<month>\\d{2})-
        (?<day>\\d{2})
    $`;
    
    '2019-01-13'.match(regex);
    
    // the result contains the groups prop with
    // the named matches
    //
    // {
    //     ...
    //     groups: {
    //         day: '13',
    //         month: '01',
    //         year: '2019'
    //     }
    // };

    Using flags

To use flags with the tagged template literals, start and end your regex with slashes, as you normally would, and put the flags after the closing slash.

    const regex = crx`/
        ice
        (cream|coffee)
        /gi`;
    
    // this is the same as
    const sameRegex = /ice(cream|coffee)/gi;


  • mobile-carrier-bot

    mobile-carrier-bot


A bot to access mobile carrier services. It currently supports:

    • Three IE
    • TIM
    • Iliad

    🚧🚧🚧🚧🚧🚧🚧🚧🚧🚧
    ⚠️ Heavy Work in Progress ⚠️
    🚧🚧🚧🚧🚧🚧🚧🚧🚧🚧

    TODO (not in order):

    • skeleton, plugins, setup
    • architecture docs and diagrams
• healthcheck status/info/env
    • expose prometheus metrics via endpoint
    • expose JVM metrics via JMX
    • scalatest and scalacheck
    • codecov or alternatives
    • telegram client (polling)
    • slack client (webhook)
    • scrape at least 2 mobile carrier services to check balance
    • (polling) notify for low credits and expiry date
    • in-memory db with Ref
    • doobie db with PostgreSQL and H2
    • if/how store credentials in a safe way
    • authenticated endpoints as alternative to telegram/slack
    • write pure FP lib alternative to scala-scraper and jsoup (I will never do this!)
    • fix scalastyle and scalafmt
    • slate static site for api
    • gitpitch for 5@4 presentation
    • constrain all types with refined where possible
    • travis
    • travis automate publish to dockerhub
    • publish to dockerhub
    • create deployment k8s chart
    • create argocd app
    • statefulset with PostgreSQL
    • alerting with prometheus to slack
    • grafana dashboard
• backup/restore logs and metrics even if the cluster is re-created
    • generate and publish scaladoc
    • fix manual Circe codecs with withSnakeCaseMemberNames config
    • add gatling stress tests
    • add integration tests
    • manage secrets in k8s

    Endpoints

# health checks
    http :8080/status
    http :8080/info
    http :8080/env
    

    Development

    # test
    sbt test -jvm-debug 5005
    sbt "test:testOnly *HealthCheckEndpointsSpec"
    sbt "test:testOnly *HealthCheckEndpointsSpec -- -z statusEndpoint"
    
    # run with default
    TELEGRAM_API_TOKEN=123:xyz sbt app/run

    sbt aliases

    • checkFormat checks format
    • format formats sources
    • update checks outdated dependencies
    • build checks format and runs tests

    Other sbt plugins

    • dependencyTree shows project dependencies

    Deployment

    # build image
    sbt clean docker:publishLocal
    
    # run temporary container
    docker run \
      --rm \
      --name mobile-carrier-bot \
      niqdev/mobile-carrier-bot-app:0.1
    
    # access container
    docker exec -it mobile-carrier-bot bash
    
    # publish
    docker login
    docker tag niqdev/mobile-carrier-bot-app:0.1 niqdev/mobile-carrier-bot-app:latest
    docker push niqdev/mobile-carrier-bot-app:latest

    Charts

    # print chart
    helm template -f charts/app/values.yaml charts/app/
    
    # apply chart
    helm template -f charts/app/values.yaml charts/app/ | kubectl apply -f -
    
# verify healthcheck
    kubectl port-forward deployment/<DEPLOYMENT_NAME> 8888:8080
    http :8888/status
    
    # logs
    kubectl logs <POD_NAME> -f


  • baskets

    Baskets


    A website to manage orders for local food baskets.

    Project built using Django, Bootstrap and JavaScript.


    Table of contents

    1. Background and goal
    2. Features
    3. Dependencies
    4. Run using Docker
    5. Populate dummy database
    6. Configure SMTP
    7. Tests run
    8. API Reference
    9. UI Language

    Background and goal

    This project has been developed to meet a real need for a local association.

    The aforementioned association centralizes orders for several local food producers. Thus, food baskets are delivered regularly to users.

    Before the deployment of this application, administrators got orders from users via SMS or email.

    Baskets app aims to save them time by gathering user orders in one unique tool.

    Payments are managed outside this application.

    Features

    User interface

• Sign Up page:
  • User account creation, entering personal information and setting a password.
  • Passwords are validated to prevent weak passwords.
  • A verification email is sent to the user with a link to a page allowing them to confirm their email address.
• Sign In page:
  • Users with a verified email can log in using their email and password.
    • Next Orders page:
      • Shows the list of deliveries for which we can still order, in chronological order.
      • Clicking on each delivery opens a frame below showing delivery details: delivery date, last day to order and available products arranged by producer.
      • User can create one order per delivery.
      • Orders can be updated or deleted until their deadline.
    • Order history page:
      • Shows a list of user’s closed orders in reverse chronological order.
      • Clicking on each order will open its details below.
    • Password reset:
      • In “Login” page, a link allows users to request password reset entering their email address.
      • If an account exists for that email address, an email is sent with a link to a page allowing to set a new password.
    • Profile page:
      • Clicking on username loads a page where users can view and update its profile information.
    • Contact us page:
      • A link on footer loads a page with a contact form. The message will be sent to all staff members.

All functionality except “contact” requires authentication.

    Admin interface

    Users with both “staff” and “superuser” status can access admin interface.

    • Users page:
      • Manage each user account: activate/deactivate, set user groups and set staff status.
    • Groups page:
      • Manage groups.
      • Email all group users via a link.
    • Producers page:
• Manage producers and their products (name and unit price).
      • Deactivate whole producer or single product:
        • Deactivated products won’t be available for deliveries.
        • If a product with related opened order items is deactivated, those items will be removed and a message will be shown to email affected users.
      • Export .xlsx file containing recap of monthly quantities ordered for each product (one sheet per producer).
      • If a product has related opened order items and its unit price changes, related opened orders will be updated and a message will be shown to email affected users.
    • Deliveries page:
      • Create/update deliveries, setting its date, order deadline, available products and optional message.
        • If “order deadline” is left blank, it will be set to ORDER_DEADLINE_DAYS_BEFORE before delivery date.
      • View total ordered quantity for each product to notify producers. A link allows seeing all related Order Items.
      • If a product is removed from an opened delivery, related opened orders will be updated and a message will be shown to email affected users.
      • In “Deliveries list” page:
        • View “number of orders” for each delivery, which links to related orders.
        • Export order forms:
          • Once a delivery deadline is passed, a link will be shown to download delivery order forms in xlsx format.
          • The file will contain one sheet per order including user information and order details.
        • Action to email users having ordered for selected deliveries.
    • Orders page:
      • View user orders and, if necessary, create and update them.
      • In “Orders list” page:
        • Export .xlsx file containing recap of monthly order amounts per user.
        • If one or several orders are deleted, a message will be shown to email affected users.

    Other

• Mobile-responsiveness: This has been achieved using the Bootstrap framework for the user interface. Moreover, the Django admin interface is also mobile responsive.
    • API: User orders can be managed using an API. See API reference for further details.
    • UI Translation: Translation strings have been used for all UI text to facilitate translation. See UI Language for further details.

    Dependencies

    In addition to Django, the following libraries have been used:

    Required versions can be seen in requirements (pip) or Pipfile (pipenv).

    Run using Docker

    $ git clone https://github.com/daniel-ob/baskets.git
    $ cd baskets
    

    Then run:

    $ docker compose up -d
    

    And finally, create a superuser (for admin interface):

    $ docker compose exec web python manage.py createsuperuser
    

Please note that, for simplicity, the console email backend is used by default for email sending, so emails will be written to stdout.

    Populate dummy database

    docker exec baskets-web sh -c "python manage.py shell < populate_dummy_db.py"
    

    Configure SMTP

    • Change backend on config/settings.py:
    EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
    
    • Set SMTP server config on .envs/.local/.web:
    # SMTP server config (if used)
    EMAIL_HOST=
    EMAIL_HOST_PASSWORD=
    EMAIL_HOST_USER=
    EMAIL_PORT=
    EMAIL_USE_TLS=
    

    Tests run

    Be sure you have ChromeDriver installed to run Selenium tests.

    First launch db container:

    $ docker compose up -d db
    

    Then open virtual environment and install all dependencies:

    $ pipenv shell
    (baskets)$ pipenv install --dev
    

    Finally, run all tests:

    (baskets)$ python manage.py test
    

    To run only functional tests:

    (baskets)$ python manage.py test baskets.tests.test_functional
    

    API Reference

    A Postman collection to test the API can be found here.

    Browsable API

    If settings.DEBUG is set to True, browsable API provided by REST framework can be visited on http://127.0.0.1:8000/api/v1/

    API Authentication

    All API endpoints requires token authentication.

A JWT token pair can be requested on /api/token/ by providing a username and password (request body form-data). This returns access and refresh tokens.

To authenticate requests, the access token must be added to the headers:

    Authorization: Bearer {{access_token}}
    

When expired, the access token can be refreshed on /api/token/refresh/ by providing the refresh token.
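Putting the token flow together, a hedged curl sketch (the host and credentials are placeholders; the endpoints are the ones documented above):

# request a JWT token pair
curl -X POST http://127.0.0.1:8000/api/token/ \
  -F "username=myuser" -F "password=mypassword"

# call an authenticated endpoint with the returned access token
curl http://127.0.0.1:8000/api/v1/orders/ \
  -H "Authorization: Bearer {{access_token}}"

# refresh an expired access token
curl -X POST http://127.0.0.1:8000/api/token/refresh/ \
  -F "refresh={{refresh_token}}"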

    List open deliveries

    List deliveries for which we can still order.

    GET /api/v1/deliveries/
    

    Response

     Status: 200 OK
    
    [
        {
            "url": "http://127.0.0.1:8000/api/v1/deliveries/3/",
            "date": "2023-06-27",
            "order_deadline": "2023-06-23"
        },
        {
            "url": "http://127.0.0.1:8000/api/v1/deliveries/2/",
            "date": "2023-07-04",
            "order_deadline": "2023-06-30"
    }
]
    

    Get delivery detail

    GET /api/v1/deliveries/{delivery_id}/
    

    Response

     Status: 200 OK
    
    {
        "id": 2,
        "date": "2023-05-30",
        "order_deadline": "2023-05-25",
        "products_by_producer": [
            {
                "name": "producer1",
                "products": [
                    {
                        "id": 1,
                        "name": "Eggs (6 units)",
                        "unit_price": "2.00"
                    },
                ]
            },
            {
                "name": "producer2",
                "products": [
                    {
                        "id": 2,
                        "name": "Big vegetables basket",
                        "unit_price": "1.15"
                    }
                ]
            }
        ],
        "message": "This week meat producer is on vacation",
    }
    

    List user orders

    GET /api/v1/orders/
    

    Response

     Status: 200 OK
    
    [
        {
            "url": "http://127.0.0.1:8000/api/v1/orders/30/",
            "delivery": {
                "url": "http://127.0.0.1:8000/api/v1/deliveries/2/",
                "date": "2023-07-04",
                "order_deadline": "2023-06-30"
            },
            "amount": "220.00",
            "is_open": true
        }
    ]
    

    Get order detail

    GET /api/v1/orders/{order_id}/
    

    Response

     Status: 200 OK
    
    {
        "url": "http://127.0.0.1:8000/api/v1/orders/30/",
        "delivery": 2,
        "items": [
            {
                "product": 5,
                "product_name": "Package of meat (5kg)",
                "product_unit_price": "110.00",
                "quantity": 2,
                "amount": "220.00"
            }
        ],
        "amount": "220.00",
        "message": "",
        "is_open": true
    }
    

    Create an order

    POST /api/v1/orders/
    
    {   
        "delivery": 3,
        "items": [
            {
                "product": 14,
                "quantity": 2
            }
        ],
        "message": "is it possible to come and pick it up the next day?"
    
    }
    

Requests must follow these rules:

• the delivery order_deadline must not have passed
• a user can only post one order per delivery
• all item products must be available in delivery.products

    Response

    Status: 201 Created
    
    (Created order detail)
    

    Update an order

    Orders can be updated until delivery.order_deadline.

    PUT /api/v1/orders/{order_id}/
    
    {   
        "delivery": 3,
        "items": [
            {
                "product": 14,
                "quantity": 1
            }
        ]
    }
    

    Response

     Status: 200 OK
    
    (Updated order detail)
    

    Delete an order

    DELETE /api/v1/orders/{order_id}/
    

    Response

     Status: 204 No Content
    

    UI Language

Translation strings have been used for all text in the user and admin interfaces, so all of it can be extracted into message files (.po) to facilitate translation.

    In addition to default language (English), French translation is available and can be set on settings.py:

    LANGUAGE_CODE = "fr"
    

    The server must be restarted to apply changes.

    Adding new translations

    From base directory, run:

    django-admin makemessages -l LANG
    django-admin makemessages -d djangojs -l LANG
    

    Where LANG can be, for example: es, es_AR, de …

This will generate django.po and djangojs.po translation files inside the locale/LANG/LC_MESSAGES folder.

    Once all msgstr in .po files are translated, run:

    django-admin compilemessages
    

    This will generate corresponding .mo files.

  • sense-embedding


    Sense Embedding

    Datasets

    The datasets used can be found here:

    Preprocessing

Before training the model, we need to preprocess the raw dataset. We take EuroSense as an example. EuroSense consists of a single large XML file (21 GB uncompressed for the high-precision version); even though it is a multilingual corpus, we will use only the English sentences. The file can be filtered with the filter_eurosense() function inside the preprocessing/eurosense.py file.

The EuroSense file contains sentences with already-tokenized text. Each annotation marks the sense of a word in the text, identified by the anchor attribute, and provides the lemma of the word it is tagging along with the synset id.

    <sentence id="0">
      <text lang="en">It is vital to minimise the grey areas and  [...] </text>
      <annotations>
        <annotation lang="en" type="NASARI" anchor="areas" lemma="area"
            coherenceScore="0.2247" nasariScore="0.9829">bn:00005513n</annotation>
        ...
      </annotations>
    </sentence>
    

It is convenient to preprocess the XML into a single text file, replacing each anchor with the corresponding lemma_synset. A line in the parsed dataset, from the example above, is

    It is vital to minimise the grey area_bn:00005513n and [...]
    

    We can run the parse.py script to obtain this parsed dataset.

    python code/parse.py es -i es_raw.xml -o parsed_es.txt 

    Train

The Gensim implementations of Word2Vec and FastText are used to train the sense vectors. The training script is implemented in the train.py file. To start the training phase, run

    python code/train.py parsed_es.txt -o sensembed.vec

    For a complete list of options run python code/train.py -h

    usage: train.py [-h] -o OUTPUT [-m MODEL] [--model_path SAVE_MODEL]
                    [--min-count MIN_COUNT] [--iter ITER] [--size SIZE]
                    input [input ...]
    
    positional arguments:
      input                 paths to the corpora
    
    optional arguments:
      -h, --help            show this help message and exit
      -o OUTPUT             path where to save the embeddings file
      -m MODEL              model implementation, w2v=Word2Vec, ft=FastText
      --model_path SAVE_MODEL
                            path where to save the model file
      --min-count MIN_COUNT
                            ignores all words with total frequency lower than this
      --iter ITER           number of iterations over the corpus
      --size SIZE           dimensionality of the feature vectors

The output should be in the Word2Vec format, where the vocabulary is composed of lemma_synset entries, each with its corresponding vector.

    number_of_senses embedding_dimension
    lemma1_synset1 dim1 dim2 dim3 ... dimn
    lemma2_synset2 dim1 dim2 dim3 ... dimn
    

    Evaluation

The evaluation consists of measuring the similarity or relatedness of pairs of words. A word similarity dataset (WordSimilarity-353) consists of a list of pairs of words; for each pair we have a similarity score established by human annotators.

    Word1     Word2     Gold
    --------  --------  -----
    tiger     cat       7.35
    book      paper     7.46
    computer  keyboard  7.62
    

The scoring algorithm inside score.py computes the cosine similarity between all the senses of each pair of words in the word similarity dataset.

    for each w_1, w_2 in ws353:
       S_1 <- all sense embeddings associated with w_1
       S_2 <- all sense embeddings associated with w_2
       score <- -1.0
       For each pair s_1 in S_1 and s_2 in S_2 do:
           score = max(score, cos(s_1, s_2))
       return score
    

    where cos(s_1, s_2) is the cosine similarity between vector s_1 and s_2.

    Now we check our scores against the gold ones in the dataset. To do so, we calculate the Spearman correlation between gold similarity scores and cosine similarity scores.

    Word1     Word2     Gold   Cosine
    --------  --------  -----  ------
    tiger     cat       7.35   0.452
    book      paper     7.46   0.784
    computer  keyboard  7.62   0.643
    
    Spearman([7.35, 7.46, 7.62], [0.452, 0.784, 0.643]) = 0.5
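
A minimal Python sketch of this scoring and correlation step, assuming the embeddings have been loaded with gensim; the senses_of() helper is hypothetical and stands in for whatever lemma-matching lookup score.py actually uses:

from gensim.models import KeyedVectors
from scipy.stats import spearmanr

vectors = KeyedVectors.load_word2vec_format("sensembed.vec")

def senses_of(word):
    # hypothetical helper: all vocab entries of the form lemma_synset
    # whose lemma part matches the given word
    return [key for key in vectors.key_to_index if key.split("_")[0] == word]

def pair_score(w1, w2):
    score = -1.0
    for s1 in senses_of(w1):
        for s2 in senses_of(w2):
            score = max(score, vectors.similarity(s1, s2))  # cosine similarity
    return score

# Spearman correlation between gold and cosine scores over the dataset
pairs = [("tiger", "cat"), ("book", "paper"), ("computer", "keyboard")]
gold = [7.35, 7.46, 7.62]
cosine = [pair_score(w1, w2) for w1, w2 in pairs]
print(spearmanr(gold, cosine).correlation)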
    

    The score can be computed by running the following command

    python code/score.py sensembed.vec resources/ws353.tab


  • s3cmd-backup

    Simple s3cmd backup script

This is a simple script that compresses a specified folder and uploads it to an AWS S3 bucket using s3cmd.

    Getting Started

    Prerequisites

    • Unix-like operating system
• s3cmd is a command line tool that makes it possible to put/get files into/from an S3 bucket. Please make sure that s3cmd is installed and configured.
  Check the s3cmd installation guide here and run s3cmd --configure after installation.
    • zip or tar should be installed
    • A configured aws s3 bucket

    Installation

    via curl

    $ curl -Lo backup https://git.io/fhMJy

    via wget

    $ wget -O backup https://git.io/fhMJy

    via httpie

    $ http -do backup https://git.io/fhMJy

    via git clone

    $ git clone https://github.com/MoonLiightz/s3cmd-backup.git
    $ cd s3cmd-backup

    Note

    Don’t forget to give the script execution permissions.

    $ chmod +x backup

    Configuration

To configure the script, edit the downloaded file with an editor of your choice, such as nano. At the top of the file you will find some configuration options.

The following config options are available:

• BACKUP_PATH: Path to the location, without ending /, of the folder which should be saved.
  Example: if you want to save the folder myData located in /root, set BACKUP_PATH="/root"
• BACKUP_FOLDER: Name of the folder which should be saved.
  Example: based on the previous example, set BACKUP_FOLDER="myData"
• BACKUP_NAME: Name of the backup file. The date on which the backup was created is automatically appended to the name.
  Example: if you set BACKUP_NAME="myData-backup", the full name of the backup is myData-backup_year-month-day_hour-minute-second
• S3_BUCKET_NAME: Name of the S3 bucket where the backups will be stored.
  Important: the name of the bucket, not the Bucket-ARN.
  Example: S3_BUCKET_NAME="mybucket"
• S3_BUCKET_PATH: Path in the S3 bucket, without ending /, where the backups will be stored.
  Example: S3_BUCKET_PATH="/backups"
• COMPRESSION: The compression which will be used. Available options are zip and tar.
  Example: for zip set COMPRESSION="zip", for tar set COMPRESSION="tar"
• TMP_PATH: Path to a location where files can be temporarily stored. The path must exist.
  Example: TMP_PATH="/tmp"
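Putting the example values from the list above together, the top of the edited script would look like this (all values here are the illustrative ones from the option descriptions):

BACKUP_PATH="/root"
BACKUP_FOLDER="myData"
BACKUP_NAME="myData-backup"
S3_BUCKET_NAME="mybucket"
S3_BUCKET_PATH="/backups"
COMPRESSION="zip"
TMP_PATH="/tmp"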

    Usage

    Basic

    The script supports the following functionalities.

    Create Backup

    This command creates a backup and loads it into the specified s3 bucket.

    $ ./backup create

    List Backups

    With this command you can list the backups stored in the s3 bucket.

    $ ./backup list

    Download Backup

    To download a backup from the s3 bucket to the server you can use this command.

    $ ./backup download <filename>

    Cron

    You can also execute the script with a cronjob. The following example creates a backup every night at 2 a.m.

    0 2 * * * <path_to_script>/backup create

    License

    s3cmd-backup is released under the MIT license.


  • tailwindscss


    Tailwind SCSS

SCSS version of Tailwind CSS for people who don't use a modern module bundler.

    Why??

The original Tailwind CSS uses PostCSS as its CSS preprocessor. Therefore, we have to use a Node.js module bundler (Webpack, Rollup, etc.) in order to get full control over Tailwind's customization. Unfortunately, there are many cases (mainly in legacy apps) where we can't use Node.js, and I don't want this issue to prevent us from using Tailwind CSS.

By using the SCSS format, I hope that more people, especially those with non-Node.js apps, can start using Tailwind CSS and progressively improve their tech stack to eventually use the original version.

We try to keep this library as close as possible to the ongoing development of Tailwind CSS.

    Installation

    Using npm:

    npm install tailwindscss --save
    

    or yarn:

    yarn add tailwindscss
    

    Usage

    To use it on your SCSS, you can import entire style like this:

    @import "tailwindscss";

    or you can choose to import one by one:

    @import "tailwindscss/base";
    @import "tailwindscss/utilities";

    Configuration

    By default, it will generate all styles which are equivalent to Tailwind CSS’s default configuration. Below is what our configuration looks like.

    @import 'tailwindscss/src/helper';
    
    $prefix: ''; // Selector prefix;
    $separator: '_'; // Separator for pseudo-class and media query modifier
    
    $theme-colors: (
      transparent: transparent,
      black: #000,
    ); // Theme configuration
    
    $variants-text-color: (responsive, hover, focus); // Variants configuration
    
    $core-plugins-text-color: true; // Set false to disable utility

    To customize utilities, you need to import your own configuration file at the top of your SCSS file.

    @import "path-to/tailwind.config.scss";
    @import "tailwindscss/base";
    @import "tailwindscss/utilities";

    For starting out, you can run npx tailwindscss init to get full configuration.
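As an illustration, here is a hedged sketch of a custom tailwind.config.scss that overrides the variables shown above (the tw- prefix and the primary color value are our own example choices, not defaults):

@import 'tailwindscss/src/helper';

$prefix: 'tw-';    // generate classes like .tw-text-black
$separator: '_';

$theme-colors: (
  transparent: transparent,
  black: #000,
  primary: #1c64f2, // example custom color (assumption)
);

$variants-text-color: (responsive, hover, focus);
$core-plugins-text-color: true;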

    Note: You need to configure how your bundler can refer to tailwindscss node_modules yourself.

    Documentation

    Head over to the original website for more guideline about utilities. Of course, some sections like installation are not applicable for this library.

    Limitation

Because of SCSS limitations, the following cannot be supported in this library:

SCSS does not support several characters, such as the colon (:) and the slash (/), in class names, because they will always be evaluated as language keywords. For your safety, keep your prefix and separator to the dash (-) and underscore (_) characters.

    TODO

    • important flag
    • responsive
    • pseudo-class (hover, focus, focus-within, active and group-hover)
    • colors