Blog

  • bls-data-extract

    Average Price Data (AP) Database

    Table of Contents


    Introduction

    The Average Price Data (AP) from the Bureau of Labor Statistics (BLS) provides detailed information on average consumer prices for household fuels, motor fuels, and food items. Collected monthly across various urban areas in the United States, this data is crucial for measuring the price levels of specific items over time and across different regions.

    This repository contains scripts and a database schema to set up and manage a local SQLite database for storing and querying the AP data. It includes tools for downloading the latest data from the BLS website and fetching Consumer Price Index (CPI) data via the BLS API.


    Database Structure

    The database comprises several tables that store data about items, areas, periods, series, and the actual price observations. Understanding the schema and relationships between these tables is crucial for constructing accurate SQL queries and extracting meaningful insights.

    Tables and Their Relationships

    1. ap_item

      • Purpose: Stores information about the items for which average prices are recorded.
      • Fields:
        • item_code (TEXT, PRIMARY KEY): Unique identifier for each item.
        • item_name (TEXT): Descriptive name of the item.
      • Example Entries:
        • 701111: Flour, white, all purpose, per lb. (453.6 gm)
        • 702111: Sugar, white, all sizes, per lb. (453.6 gm)
    2. ap_area

      • Purpose: Contains information about the geographic areas covered in the survey.
      • Fields:
        • area_code (TEXT, PRIMARY KEY): Unique identifier for each area.
        • area_name (TEXT): Descriptive name of the area.
      • Example Entries:
        • 0000: U.S. city average
        • A100: Northeast Urban
        • S200: South Urban
    3. ap_period

      • Purpose: Defines the periods (months) for which data is collected.
      • Fields:
        • period (TEXT, PRIMARY KEY): Code representing the period (e.g., M01 for January).
        • period_abbr (TEXT): Abbreviation of the period name (e.g., JAN).
        • period_name (TEXT): Full name of the period (e.g., January).
      • Example Entries:
        • M01: JAN, January
        • M02: FEB, February
    4. ap_series

      • Purpose: Provides metadata about each time series, linking items and areas.
      • Fields:
        • series_id (TEXT, PRIMARY KEY): Unique identifier for each time series.
        • area_code (TEXT): References ap_area.area_code.
        • item_code (TEXT): References ap_item.item_code.
        • series_title (TEXT): Title describing the series.
        • footnote_codes (TEXT): Any associated footnotes.
        • begin_year (INTEGER): First year of data availability.
        • begin_period (TEXT): First period of data availability.
        • end_year (INTEGER): Last year of data availability.
        • end_period (TEXT): Last period of data availability.
      • Relationships:
        • ap_series.area_code → ap_area.area_code
        • ap_series.item_code → ap_item.item_code
    5. ap_data_current

      • Purpose: Holds current year-to-date average price data.
      • Fields:
        • series_id (TEXT): References ap_series.series_id.
        • year (INTEGER): Year of the observation.
        • period (TEXT): References ap_period.period.
        • value (REAL): Observed average price.
        • footnote_codes (TEXT): Any associated footnotes.
      • Primary Key: (series_id, year, period)
      • Relationships:
        • ap_data_current.series_id → ap_series.series_id
        • ap_data_current.period → ap_period.period
    6. ap_data_food

      • Purpose: Contains average price data for food items.
      • Fields and Relationships: Same as ap_data_current.
    7. ap_data_gasoline

      • Purpose: Contains average price data for gasoline.
      • Fields and Relationships: Same as ap_data_current.
    8. ap_data_householdfuels

      • Purpose: Contains average price data for household fuels.
      • Fields and Relationships: Same as ap_data_current.
    9. ap_seasonal

      • Purpose: Stores information about seasonal adjustment codes.
      • Fields:
        • seasonal_code (TEXT, PRIMARY KEY): Code indicating seasonal adjustment.
        • seasonal_text (TEXT): Description of the seasonal code.

    Schema Definition

    Below is the SQL schema used to create the tables:

    CREATE TABLE ap_item (
        item_code TEXT PRIMARY KEY,
        item_name TEXT
    );
    
    CREATE TABLE ap_area (
        area_code TEXT PRIMARY KEY,
        area_name TEXT
    );
    
    CREATE TABLE ap_period (
        period TEXT PRIMARY KEY,
        period_abbr TEXT,
        period_name TEXT
    );
    
    CREATE TABLE ap_seasonal (
        seasonal_code TEXT PRIMARY KEY,
        seasonal_text TEXT
    );
    
    CREATE TABLE ap_series (
        series_id TEXT PRIMARY KEY,
        area_code TEXT,
        item_code TEXT,
        series_title TEXT,
        footnote_codes TEXT,
        begin_year INTEGER,
        begin_period TEXT,
        end_year INTEGER,
        end_period TEXT
    );
    
    CREATE TABLE ap_data_current (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE ap_data_food (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE ap_data_gasoline (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE ap_data_householdfuels (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );
    
    CREATE TABLE cpi_info (
        series_id TEXT,
        year INTEGER,
        period TEXT,
        value REAL,
        footnote_codes TEXT,
        PRIMARY KEY(series_id, year, period)
    );

    Data Flow for Query Construction

    To construct a query that retrieves specific average price data, follow these steps:

    1. Identify the Item:

      • Use ap_item to find the item_code corresponding to the desired item_name.
    2. Identify the Area:

      • Use ap_area to find the area_code corresponding to the desired area_name.
    3. Find the Series ID:

      • Use ap_series to find the series_id matching both the item_code and area_code.
    4. Retrieve Data Observations:

      • Use the series_id to query the appropriate ap_data_* table (ap_data_food, ap_data_gasoline, etc.) for the desired year and period.
    5. Join Period Information:

      • Use ap_period to translate period codes into readable period_name values.
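The five steps above can be sketched with Python's built-in sqlite3 module. The sample rows below (including the series ID APU0000702111 and the price value) are illustrative only; real data comes from the BLS files.

```python
import sqlite3

# In-memory database with a minimal subset of the schema and sample rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ap_item (item_code TEXT PRIMARY KEY, item_name TEXT);
CREATE TABLE ap_area (area_code TEXT PRIMARY KEY, area_name TEXT);
CREATE TABLE ap_series (series_id TEXT PRIMARY KEY, area_code TEXT, item_code TEXT);
CREATE TABLE ap_data_food (series_id TEXT, year INTEGER, period TEXT, value REAL,
                           PRIMARY KEY(series_id, year, period));
INSERT INTO ap_item VALUES ('702111', 'Sugar, white, all sizes, per lb. (453.6 gm)');
INSERT INTO ap_area VALUES ('0000', 'U.S. city average');
INSERT INTO ap_series VALUES ('APU0000702111', '0000', '702111');
INSERT INTO ap_data_food VALUES ('APU0000702111', 2023, 'M01', 0.97);
""")

# Steps 1-3: resolve the item_code, the area_code, then the series_id.
item_code, = conn.execute(
    "SELECT item_code FROM ap_item WHERE item_name LIKE 'Sugar%'").fetchone()
area_code, = conn.execute(
    "SELECT area_code FROM ap_area WHERE area_name = 'U.S. city average'").fetchone()
series_id, = conn.execute(
    "SELECT series_id FROM ap_series WHERE item_code = ? AND area_code = ?",
    (item_code, area_code)).fetchone()

# Step 4: retrieve the observations for that series.
rows = conn.execute(
    "SELECT year, period, value FROM ap_data_food WHERE series_id = ?",
    (series_id,)).fetchall()
print(rows)  # → [(2023, 'M01', 0.97)]
```

Step 5 (joining ap_period for readable month names) follows the same pattern with one more JOIN, as in the sample query later in this document.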

    Setup Instructions

    Prerequisites

    • Python 3.6+
    • SQLite3
    • pip (Python package installer)
    • Virtual Environment (recommended)

    Installing Dependencies

    # Clone the repository
    git clone https://github.com/yourusername/ap-database.git
    cd ap-database
    
    # Create a virtual environment (optional but recommended)
    python -m venv venv
    source venv/bin/activate  # On Windows, use venv\Scripts\activate
    
    # Install required Python packages
    pip install -r requirements.txt

    Setting Up the Database

    Run the seed_data.py script to initialize the database:

    python seed_data.py

    This script will:

    • Create the SQLite database named average_price_data.db.
    • Create all the tables as per the schema.
    • Load data from local CSV files into the database.
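In outline, the loading step works like this. This is a minimal sketch; the real seed_data.py may parse the tab-delimited BLS flat files differently, and the sample content below is illustrative.

```python
import csv
import io
import sqlite3

# Illustrative tab-delimited content in the shape of the BLS ap.item file.
raw = "item_code\titem_name\n701111\tFlour, white, all purpose, per lb. (453.6 gm)\n"

conn = sqlite3.connect(":memory:")  # the real script writes average_price_data.db
conn.execute("CREATE TABLE ap_item (item_code TEXT PRIMARY KEY, item_name TEXT)")

# Parse the file and bulk-insert the rows, stripping stray whitespace.
reader = csv.DictReader(io.StringIO(raw), delimiter="\t")
conn.executemany(
    "INSERT INTO ap_item (item_code, item_name) VALUES (:item_code, :item_name)",
    ({"item_code": r["item_code"].strip(), "item_name": r["item_name"].strip()}
     for r in reader),
)
conn.commit()

count, = conn.execute("SELECT COUNT(*) FROM ap_item").fetchone()
print(count)  # → 1
```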

    Downloading Data

    Use the get_http.py script to download the necessary data files from the BLS website:

    python get_http.py

    This script will:

    • Download specified files from the BLS FTP site.
    • Save them in the downloads directory.

Note: Ensure that the downloads directory exists or is created by the script.

    Fetching CPI Data via API

    Use the get_api.py script to fetch Consumer Price Index (CPI) data via the BLS API:

    1. Obtain a BLS API Key:

      • Register at the BLS website to obtain an API key.

      • Store the API key in a .env file in the project root:

        BLS_API_KEY=your_api_key_here
        
    2. Run the Script:

      python get_api.py

      This script will:

      • Fetch CPI data for specified series_id, start_year, and end_year.
      • Save the data into text files and insert it into the cpi_info table in the database.
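For reference, the request body sent to the BLS Public Data API v2 can be built as follows. The series ID CUUR0000SA0 and the year range are placeholders; the actual get_api.py may construct its request differently.

```python
import json
import os

BLS_API_URL = "https://api.bls.gov/publicAPI/v2/timeseries/data/"

def build_payload(series_ids, start_year, end_year, api_key):
    """Build the JSON body expected by the BLS Public Data API v2."""
    return {
        "seriesid": list(series_ids),
        "startyear": str(start_year),
        "endyear": str(end_year),
        "registrationkey": api_key,
    }

# The key would normally come from the .env file described above.
payload = build_payload(["CUUR0000SA0"], 2020, 2023, os.getenv("BLS_API_KEY", ""))
body = json.dumps(payload)

# The actual HTTP call would be something like:
#   requests.post(BLS_API_URL, data=body,
#                 headers={"Content-type": "application/json"})
print(payload["startyear"])  # → 2020
```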

    Usage Examples

    Sample Query Structure

    To retrieve specific average price data, you can use the following SQL query structure:

    SELECT
      d.year,
      p.period_name,
      i.item_name,
      a.area_name,
      d.value
    FROM
      ap_data_food AS d
    JOIN
      ap_series AS s ON d.series_id = s.series_id
    JOIN
      ap_item AS i ON s.item_code = i.item_code
    JOIN
      ap_area AS a ON s.area_code = a.area_code
    JOIN
      ap_period AS p ON d.period = p.period
    WHERE
      i.item_name = 'Sugar, white, all sizes, per lb. (453.6 gm)'
      AND a.area_name = 'U.S. city average'
    ORDER BY
  d.year, d.period;

    This query will:

    • Retrieve the average price of sugar per pound for the U.S. city average.
    • Display the data ordered by year and month.

    Important Notes

    • Primary Keys:

      • Ensure uniqueness and efficient data retrieval.
    • Foreign Keys:

      • Maintain referential integrity between tables.
    • Data Partitioning:

      • Data is divided into specific tables based on item categories for optimized access.
    • Understanding Period Codes:

      • Monthly Periods:
        • M01 to M12 represent January to December.
      • Annual Averages:
        • M13 may be used to represent annual average data.
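A small helper makes the period-code convention concrete. This is a sketch; the ap_period table in the database is the authoritative mapping.

```python
# Map BLS period codes (M01-M12, plus M13 for the annual average) to names.
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def period_name(period: str) -> str:
    """Translate a period code such as 'M01' into a readable name."""
    if period == "M13":
        return "Annual average"
    if period.startswith("M") and period[1:].isdigit():
        number = int(period[1:])  # 'M01' -> 1
        if 1 <= number <= 12:
            return MONTHS[number - 1]
    raise ValueError(f"Unknown period code: {period}")

print(period_name("M01"))  # → January
print(period_name("M13"))  # → Annual average
```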

    Contributing

    Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.


    License

    This project is licensed under the MIT License.


    Visit original content creator repository
    https://github.com/ashakoen/bls-data-extract

  • luismayta

    Hi there, I’m Luis Mayta👋
    GitHub luismayta stars GitHub luismayta follower

    I’m a Passionate Coder {;} | Cryptocurrency and AI Enthusiast | Terraform, Go, Python, Haskell and Dart Lover!!

    • 🔭 I’m currently working as a Backend/DevOps engineer, cloud consultant, and solutions architect, making your business resilient, robust, and scalable.
    • 🌱 I’m currently learning a ton of stuff related to APIs, message queues services, cloud services (focus AWS), and proficiency in programming languages (or runtime libraries 😬) like Go, Rust, Python, Ruby, Scala, Deno, HCL (Terraform), and others…
    • 👯 I’m looking to collaborate on YouTube.
    • 🥅 Goals: Contribute more to Open Source projects.
    • 💬 Ask me about Flutter, Go, Python or any tech related stuff.

    Favorite Quotes ⚡

    • Microsoft isn’t evil, they just make really crappy operating systems. ~ Linus Torvalds
    • In real open source, you have the right to control your own destiny. ~ Linus Torvalds
    • Talk less, Do More.
    • It is better to make a mistake than to do nothing.
    • Not everyone who works hard succeeds, but I don’t know anyone who has succeeded without working hard.
    • You are aware of your limitations, but limitations are there to be overcome; the rest depends only on you.

    👨‍💻 Repositories I created recently

    📓 Gists I wrote

    🚀 Latest releases I’ve contributed to

    👯 Check out some of my recent followers

    ❤️ Sponsors

    Many thanks everyone! 🙏

    📫 How to reach me

    Linkedin Badge Gmail Badge Telegram

    Visit original content creator repository https://github.com/luismayta/luismayta
  • Action-Detection

    Action-Detection(Action-Net)

    Action-Net is a dataset containing images of 16 different human actions.


    Action-Net is a dataset containing images of human actions, collected to ensure that machine learning systems can be trained to understand human actions, gestures and activities. This is part of DeepQuest AI‘s mission to train machine learning systems to perceive, understand and act accordingly in solving problems in any environment they are deployed.

    This is the first release of the Action-Net dataset. It contains 19,200 images that span 16 classes. The classes included in this release are:

    • Calling
    • Clapping
    • Cycling
    • Dancing
    • Drinking
    • Eating
    • Fighting
    • Hugging
    • Kissing
    • Laughing
    • Listening to Music
    • Running
    • Sitting
    • Sleeping
    • Texting
    • Using Laptop

    There are 1,200 images for each category, with 1,000 images for training and 200 images for testing. We are working on adding more categories in the future and will continue to improve the dataset.

    >>> DOWNLOAD, TRAINING AND PREDICTION:
    The Action-Net dataset is provided for download in the release section of this repository. You can download the dataset via the link below.
    https://github.com/OlafenwaMoses/Action-Net/releases/download/v1/action_net_v1.zip

    We have also provided a Python codebase to download the images, train ResNet50 on the images, and perform prediction using a pretrained model (also ResNet50) provided in the release section of this repository. The Python codebase is contained in the action_net.py file, and the model class labels for prediction are provided in model_class.json. The pretrained ResNet50 model is available for download via the link below.
    https://github.com/OlafenwaMoses/Action-Net/releases/download/v1/action_net_ex-060_acc-0.745313.h5
    This pre-trained model was trained for 60 epochs only, but it achieved over 74% accuracy on 3200 test images. You can see the prediction results on new images that were not part of the dataset in the Prediction Results section below. More experiments will enhance the accuracy of the model.
    Running the experiment or prediction requires that you have Tensorflow, Keras, OpenCV and ImageAI installed. You can install these dependencies via the commands below.


    – Tensorflow 1.4.0 (and later versions) Install or install via pip

     pip3 install --upgrade tensorflow 

    – OpenCV Install or install via pip

     pip3 install opencv-python 

    – Keras 2.x Install or install via pip

     pip3 install keras 

    – ImageAI 2.0.3

    pip3 install imageai 

    Prediction Results

    eating  :  100.0
    drinking  :  3.92037860508232e-09
    using-laptop  :  6.944534465709584e-11
    calling  :  5.7910951424891555e-12
    


    eating  :  99.44907426834106
    drinking  :  0.5508399568498135
    using-phone  :  5.766927415606915e-05
    sitting  :  1.1222620344142342e-05
    


    fighting  :  99.97442364692688
    running  :  0.01658390392549336
    dancing  :  0.008970857743406668
    sitting  :  7.210289965087213e-06
    


    laughing  :  99.99998807907104
    clapping  :  1.3144966715117334e-05
    calling  :  4.0294068526236515e-06
    eating  :  4.981405066217803e-07
    


    running  :  99.99852180480957
    calling  :  0.0009251662959286477
    listening-to-music  :  0.0002909338491008384
    cycling  :  0.00024121977730828803
    

    Visit original content creator repository https://github.com/YashNagare/Action-Detection
  • My-Docs-Front

    Getting Started with Create React App

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    npm start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in the browser.

    The page will reload if you make edits.
    You will also see any lint errors in the console.

    npm test

    Launches the test runner in the interactive watch mode.
    See the section about running tests for more information.

    npm run build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    npm run eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    npm run build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify

    Visit original content creator repository
    https://github.com/AmitrajitDas/My-Docs-Front

  • ux_tools

    Table of Contents

    UX – User Experience

    User Experience – UX occurs when the user comes into contact with a product.

    • How to listen to the user and extract needs?
    • How to create products that people need?

    Differences between UI and UX

    UI – User Interface:

    • What the user finds and sees when they arrive at a website or app
    • Everything that allows you to interact with the website or app, from buttons to forms
    • Ensures that the website or app looks good and works well on all platforms: mobile, web, tablets

    UX – User Experience:

    • Focuses on what the user perceives of the website or app
    • Focuses on whether the content was useful, whether the navigation was enriching, and whether it was easily manageable and intuitive
    • Includes target research, psychology, design, and marketing

    UX Tools

    UX tools are essential because they give you a wide range of options to help you create what you need.

    UX Tools and Methods


    1. Product strategy

    How to discover and create what people need.

    2. Ideas Generation

    How to externalize and communicate what you have in mind.

    3. Planning and Development

    How to execute good ideas.

    4. Validation and Research

    How to evaluate the solution of the problems and improve the product.

    5. Interface Design

    How to transform ideas into sketch, prototypes, and products. Usability and utility.

    6. Success Metrics

    Objectively evaluate the results of the product.

    • KPI (Key Performance Metrics)
    • CTR (Click Through Rate)
    • NPS (Net Promoter Score)
    • DAU (Daily Active Users)
    • Churn Rate
    • LTV (Lifetime Value)
    • HEART (Happiness, Engagement, Adoption, Retention, Task Success)

    7. Launch MVP

    How to launch the MVP. Learn fast and succeed.

    More interesting tools

    (strategy)

    • UXpressia Visualize customer experience and collaborate with your team.
    • FlowMapp Full stack UX platform.
    • Strategyzer Creators of the Business Model Canvas.

    (ideas)

    • MoodBoard Build beautiful, simple, free moodboards.
    • Miro Be Creative. Be Productive. From Anywhere.
    • overflow Create interactive user flow diagrams that tell a story.

    (planning)

    • Trello Keep track of everything.
    • Asana Manage your team’s work, projects, & tasks online.
    • Craft Build intuitive Roadmaps, prioritize features, connect them with your dev teams.

    (validation)

    • UXArmy Online usability testing platform.
    • UserTesting Leader in user research and software testing.
    • Unbounce Design Beautiful Landing Pages.
    • Klickpages Tool for creating landing pages.
    • OptimalSort Discover how people categorize information.
    • Optimizely Best-known tool for A/B testing.
    • SurveyMonkey Send and evaluate surveys quickly and easily.

    (metrics)

    • UserZoom Brands can test and measure UX on websites, apps, and prototypes.
    • Delighted Measure and evaluate qualitative metrics.
    • Hotjar Website Heatmaps & Behavior Analytics Tools.
    • Google Analytics Measure and track your sites and applications.

    (design and launch)

    • Axure Powerful Prototyping and Developer.
    • Marvel Rapid prototyping, testing and handoff.
    • inVision Create rich interactive prototypes.
    • Framer Best prototyping tool for teams.
    • Flinto Create interactive and animated prototypes.
    • Principle Design animated and interactive user interfaces.
    • JustInMind From wireframes to highly interactive prototypes.

    More interesting links

    UX checklist

    Other repositories

    Reference


    made with 💙 by mafda

    Visit original content creator repository https://github.com/mafda/ux_tools
  • DEB

    Discord Emoji Backup

    Discord Emoji Backup is a utility that allows you to create a backup of the emojis from your Discord server or all of the Discord servers your Discord account is in.

    Features

    • Cross-platform support for Windows and Linux
    • Filters out duplicate emojis using SHA1 file checksums.
    • Has an archival append option.

    GitHub release GitHub All Releases

    Screenshot

    Usage

    If you would like to download the emojis of a specific server, go into the server, find a channel you can type in, and type .b
    If you would like to download the emojis from ALL of the servers you are in, you can type .ba in any channel on Discord, including DMs.

    Settings

    Edit settings.json

    {
      "token":"Token_Here", // Replace Token_Here with your user token.
      "command_prefix":".", // This is the command prefix for your trigger commands(.b, .ba)
      "keep_dir":"false", // If this value is set to true it will append emojis to the folders rather than mirroring backups.
      "no_dupes":"true" // If this value is set to true it will filter duplicate emojis using SHA1 checksums.
    }
    

    How to obtain your token

    1. Press Ctrl+Shift+I (⌘⌥I on Mac) on Discord to show developer tools
    2. Navigate to the Application tab
    3. Select Local Storage > https://discordapp.com on the left
    4. Press Ctrl+R (⌘R) to reload
    5. Find token at the bottom and copy the value


    Disclaimer

    This is a self-bot which is against Discord ToS. Use it at your own risk.

    Visit original content creator repository https://github.com/noto-rious/DEB
  • pbg-ld

    pbg-ld: Linked Data Platform for Plant Breeding & Genomics

    DOI Published in PeerJ CI

    The pbg-ld software provides access to semantically integrated geno- & pheno-typic data on Solanaceae species (such as tomato and potato) and enables ranking of candidate genes associated with traits of interest.

    Prerequisites

    Install & deploy

    1. Clone this repository.

    git clone https://github.com/candYgene/pbg-ld.git

    2. Start Docker service(s).

    cd pbg-ld
    # list available services
    docker-compose config --services
    # start all services or one-by-one
    docker-compose up -d # or add [SERVICE]

    Alternatively, deploy the services on a remote server using Ansible Playbook.

    ansible-playbook -i inventory playbook.yml

    Note: grlc API can be deployed with SPARQL queries stored

    • locally (in the container)
    git clone https://github.com/candYgene/queries.git
    docker cp queries grlc:/home/grlc/
    • remotely (in a GitHub repo)

    Set the environment variables in docker-compose.yml:

    • GRLC_GITHUB_ACCESS_TOKEN
    • GRLC_SERVER_NAME (or CNAME, excluding the URI scheme http(s)://)
    • GRLC_SPARQL_ENDPOINT

    3. Access (meta)data in RDF.

    Overview of datasets

    RDF graphs: IRIs (A-Box)

    • SGN:
      • http://solgenomics.net/genome/Solanum_lycopersicum
      • http://solgenomics.net/genome/Solanum_pennellii
      • http://solgenomics.net/genome/Solanum_tuberosum
    • Ensembl:
      • http://plants.ensembl.org/Solanum_lycopersicum
      • http://plants.ensembl.org/Solanum_tuberosum
    • UniProt:
      • http://www.uniprot.org/proteomes/Solanum_lycopersicum
      • http://www.uniprot.org/proteomes/Solanum_tuberosum
    • QTLs: http://europepmc.org

    RDF graphs: IRIs (T-Box)

    • FALDO: http://biohackathon.org/resource/faldo.rdf
    • SO[FA]: http://purl.obolibrary.org/obo/so.owl
    • SIO: http://semanticscience.org/ontology/sio.owl
    • RO: http://purl.obolibrary.org/obo/ro.owl
    • GO: http://purl.obolibrary.org/obo/go.owl
    • UniProt Core: http://purl.uniprot.org/core/
    • PO: http://purl.obolibrary.org/obo/po.owl
    • TO: http://purl.obolibrary.org/obo/to.owl
    • SPTO: http://purl.bioontology.org/ontology/SPTO
    • PATO: http://purl.obolibrary.org/obo/pato.owl
    Visit original content creator repository https://github.com/candYgene/pbg-ld
  • kim-src.github.io

    Chirpy Jekyll Theme

    A minimal, responsive, and feature-rich Jekyll theme for technical writing.

    Gem Version  CI  Codacy Badge  GitHub license  996.icu

    Live Demo

    Devices Mockup

    Features

    • Dark / Light Theme Mode
    • Localized UI language
    • Pinned Posts on Home Page
    • Hierarchical Categories
    • Trending Tags
    • Table of Contents
    • Last Modified Date
    • Syntax Highlighting
    • Mathematical Expressions
    • Mermaid Diagrams & Flowcharts
    • Dark / Light Mode Images
    • Embed Videos
    • Disqus / Giscus / Utterances Comments
    • Built-in Search
    • Atom Feeds
    • PWA
    • Google Analytics / GoatCounter
    • SEO & Performance Optimization

    Documentation

    To learn how to use, develop, and upgrade the project, please refer to the Wiki.

    Contributing

    Contributions (pull requests, issues, and discussions) are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. For details, see the “Contributing Guidelines“.

    Credits

    Contributors

    Thanks to all the contributors involved in the development of the project!

    all-contributors —— Made with contrib.rocks

    Third-Party Assets

    This project is built on the Jekyll ecosystem and some great libraries, and is developed using VS Code as well as tools provided by JetBrains under a non-commercial open-source software license.

    The avatar and favicon for the project’s website are from ClipartMAX.

    License

    This project is published under MIT License.

    Visit original content creator repository https://github.com/kim-src/kim-src.github.io
  • PantherExtension

    Panther Mink Extension

    Mink extension for controlling Chrome | Firefox | Selenium thanks to Symfony Panther.

    Foreword:

    This extension is experimental (even if roughly 95% stable); some features may be missing.

    Installation:

    First, you need to install Symfony Panther and its required dependencies, then:

    composer require guikingone/panther-extension

    Usage:

    default:
      suites:
        default:
          contexts:
            - PantherExtension\Context\PantherContext:
            - PantherExtension\Context\WaitContext:
            # Your contexts
    
      extensions:
        PantherExtension\Extension\PantherExtension: ~
        Behat\MinkExtension:
          browser_name: chrome
          base_url: http://localhost
          sessions:
            default:
              panther:
                driver: 'chrome' # or 'firefox' or 'selenium'; 'chrome' is the default

    WaitContext has been introduced in 0.4
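    Once the contexts above are registered, their steps can be used directly in a feature file. The following is a minimal, illustrative sketch; the step wording (`I wait for "…"`) matches the WaitContext usage shown in the larger example later in this README, while the URL, selector, and expected text are placeholders, not part of the extension's API:

    ```gherkin
    Feature:
      As a tester, I want to wait for dynamically rendered content

      Scenario: Waiting for an element before asserting on it
        Given I am on "https://example.com/"
        # WaitContext step: blocks until the selector is present
        And I wait for "#content"
        Then I should see "Example Domain"
    ```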

    If you need to use Selenium, just adapt the session configuration:

    # ...
    
      extensions:
        PantherExtension\Extension\PantherExtension: ~
        Behat\MinkExtension:
          browser_name: chrome
          base_url: http://localhost
          sessions:
            default:
              panther:
                driver: 'selenium'
                selenium:
                  hub_url: 'http://127.0.0.1:4444/wd/hub'

    Here’s a simple example using a POC project that calls the API Platform website:

    Feature:
      As a newbie in API-Platform, I want to document myself in many features
    
      Scenario: I should be able to see the main documentation                           
        Given I am on "https://github.com/"                                                                
        And I should see "REST and GraphQL framework to build modern API-driven projects"
    
      Scenario: I should be able to see the main documentation                                           
        Given I am on "https://github.com/"                                                                                
        And I go to "/docs/distribution/"                                                                
        Then I should see "API Platform is the most advanced API platform, in any framework or language."
    
      Scenario: I should be able to document myself about GraphQL support
        Given I am on "https://github.com/"                                                
        And I follow "Get started"                                       
        When I follow "Adding GraphQL Support"                           
        Then I should be on "/docs/distribution/#adding-graphql-support" 
        Then I should see "You now have a GraphQL API!"                  
    
      Scenario: I should be able to document myself about GraphQL support thanks to the search field
        Given I am on "https://github.com/"                                                                           
        When I fill in "SEARCH..." with "GraphQL"                                                   
        And I wait for "#algolia-autocomplete-listbox-0"                                            
        Then I should see "Documentation"                                                           
        And I should see "Search by"                                                                
        And I should see "Enabling GraphQL"                                                         
    
      Scenario: I should be able to test the demo                  
        Given I am on "https://github.com/"                                          
        And I follow "Demo"                                        
        Then I should be on "https://demo-client.api-platform.com/"
        When I follow "API"                                        
        Then I should be on "https://demo.api-platform.com/"       
    
      Scenario: I should be able to test the demo                                         
        Given I am on "https://github.com/"                                                                 
        And I follow "Community"                                                          
        And I create a new client "test" using the "chrome" driver                        
        Then I switch to client "test"                                                    
        And I go to "https://github.com/"                                                                   
        Then I should see "REST and GraphQL framework to build modern API-driven projects"
        Then I remove the client "test"                                                   
        Then I should see "API Platform's community"                                      
    
    6 scenarios (6 passed)
    29 steps (29 passed)
    0m28.61s (20.63Mb)

    Documentation

    The full documentation can be found here

    CI usage

    Please refer to Symfony Panther documentation about using it in CI environments.

    Development

    The project can be launched using:

    make boot

    Every test can be launched using:

    make tests

    For more commands or help, please use:

    make

    Contributing

    Just fork this repo and submit a new PR!

    Visit original content creator repository
    https://github.com/Guikingone/PantherExtension