Blog

  • Azure-AD-OAuth-SAML-Python-Demo-CLI-APP

    dada-cli

    DADA CLI is a command-line tool for testing the behavior and features of Entra ID (Azure Active Directory). It lets you verify the functionality of app registrations and enterprise applications, with a particular focus on SAML and OAuth 2.0.

    Features

    • OAuth 2.0, OpenID Connect

      • You can experience the Authorization Code Flow and Client Credentials Flow.
      • Display and decode the obtained access tokens and ID tokens, enabling you to inspect their contents.
      • Perform simple Graph API operations to experience Continuous Access Evaluation (CAE).
      • Load a certificate file and create a JWT assertion.

      By utilizing these features, you can easily verify the information and functionality contained in token claims within Entra ID.

    • SAML

      • You can easily experience SAML Single Sign-On (SSO) in Entra ID.
      • Generates SAML requests and decodes and displays SAML responses.
      • The command options allow you to specify the SAML request signature, Authentication Context, and Name ID Format.

      This allows you to easily test how Entra ID behaves when each of these settings is applied.
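
      As a side note on the two feature groups above, here are two short, illustrative Python sketches. They are not dada-cli's implementation, and any names and URLs in them are placeholders.

      Decoding a JWT's claims (what the token decode feature displays) only requires base64url-decoding the payload segment; no signature verification is performed here:

        import base64
        import json

        def decode_jwt_claims(token):
            """Return the (unverified) claims from a JWT's payload segment."""
            payload = token.split(".")[1]
            payload += "=" * (-len(payload) % 4)  # restore base64url padding
            return json.loads(base64.urlsafe_b64decode(payload))

        # claims = decode_jwt_claims("<token obtained via 'dada auth-code token-request'>")
        # print(json.dumps(claims, indent=2))

      A SAML request sent with the HTTP-Redirect binding is typically raw-DEFLATE-compressed, base64-encoded, and URL-encoded before being attached as the SAMLRequest query parameter; the XML below is a stripped-down placeholder:

        import base64
        import urllib.parse
        import zlib

        authn_request = (
            '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
            'ID="_example" Version="2.0" IssueInstant="2024-01-01T00:00:00Z" '
            'AssertionConsumerServiceURL="http://localhost"/>'
        )

        # HTTP-Redirect binding: raw DEFLATE (no zlib header), then base64, then URL-encode.
        compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
        deflated = compressor.compress(authn_request.encode("utf-8")) + compressor.flush()
        saml_request = urllib.parse.quote_plus(base64.b64encode(deflated).decode("ascii"))

        print("https://login.microsoftonline.com/<tenant id>/saml2?SAMLRequest=" + saml_request)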

    Installation

    WSL

    1. Installation

      $ git clone https://github.com/iamkdada/Azure-AD-OAuth-SAML-Python-Demo-CLI-APP.git
      $ cd Azure-AD-OAuth-SAML-Python-Demo-CLI-APP
      $ python3 -m venv venv
      $ source venv/bin/activate
      $ pip3 install -r requirements.txt
      $ export PATH="$PATH:$PWD/src"

    Windows

    1. Download this project
    2. Extract the downloaded project.
    3. Create a virtual environment in the project directory and activate it.
      > python -m venv venv
      > .\venv\Scripts\activate
    4. Install the required libraries.
      > pip install -r requirements.txt
    5. Set up the environment variables.

      PowerShell

      > $Env:PATH += ";$PWD\src"

      Command Prompt

      set PATH=%PATH%;%CD%\src

    Improvements to simplify installation are planned.

    App Setting

    OIDC, OAuth App

    Entra ID (Azure AD)

    1. Browse to [Azure Portal]>[Microsoft Entra ID]>[App Registrations] and select New registration.
    2. Enter a Name for your application, for example dada-cli-oidc. Users of your app might see this name, and you can change it later.
    3. Select the following:
      • Account type: “Accounts in this organizational directory only”
      • Platform: “Public client/native (mobile & desktop)”
      • Redirect URI: http://localhost
    4. Select Register to create the application.

    DADA CLI

    1. Set the tenant ID & client ID
      dada configure --tenant-id "<Your Tenant ID>" --client-id "Registered Application ID"
    2. Request a token
      dada auth-code token-request

    SAML App

    Entra ID (Azure AD)

    1. Browse to [Azure Portal]>[Microsoft Entra ID]>[Enterprise Applications] and select New application.
    2. Select Create your own application
    3. Enter a Name for your application, for example dada-cli-saml. Users of your app might see this name, and you can change it later.
    4. Select “Integrate any other application you don’t find in the gallery (Non-gallery)” and select Create
    5. Browse to [Single sign-on]>[SAML] and select Edit.
    6. Add an identifier, for example dada.
    7. Add the reply URL http://localhost

    DADA CLI

    1. Set the tenant ID & entity ID
      dada configure --tenant-id "<Your Tenant ID>" --entity-id "Registered Application Entity ID"
    2. Send a SAML request
      dada saml saml-request

    Example

    • auth code token request

      $ dada auth-code token-request
      "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Imk2bEdrM0ZaenhSY1ViMkMzbkV~~~~~~~~~
      "
    • decode token

      $ dada auth-code show --token id --decode
      {
      "aud": "<GUID>",
      "exp": 1700021556,
      "iat": 1700017656,
      "iss": "https://login.microsoftonline.com/<tenant id>/v2.0",
      "name": "hoge hoge",
      "nbf": 1700017656,
      "oid": "<GUID>",
      "preferred_username": "hoge@*****.com",
      "rh": "0.AXwAji****************************",
      "sub": "XWFP_8f3rjEyjvlUzzTVB0v0W2I3DGxVn0*********",
      "tid": "GUID",
      "uti": "sBSSf-s2rkujr********",
      "ver": "2.0"
      }
    • saml request

      $ dada saml saml-request --sign --force-authn
      "
      <decoded saml response>
      "

    Visit original content creator repository
    https://github.com/iamkdada/Azure-AD-OAuth-SAML-Python-Demo-CLI-APP

  • dchart

    Description

    A simple dynamic charting project that fetches specific assets on the backend and renders graphics with d3 on the frontend. It uses Giraffe on the backend and Fable on the frontend: a full F# web application.

    Architecture

    (architecture diagram)

    The project was developed to save the assets on the server using Giraffe and to provide them on the GET routes /silver and /gold. Before initializing the Giraffe server, a synchronous request is made to save SILVER.json and GOLD.json locally in the /tmp folder. After that, two asynchronous tasks are started to re-download each file after 1 minute. Only then is the Giraffe server enabled. This ensures that when /silver and /gold are called, at least one version of SILVER.json and GOLD.json exists on the server.
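
    Purely for illustration, the startup pattern described above (synchronous first download, delayed background refresh, then serving the cached files) looks roughly like the following sketch. It is written in Python rather than the project's F#/Giraffe code, and the source URLs are placeholders.

      import threading
      import time
      import urllib.request
      from http.server import BaseHTTPRequestHandler, HTTPServer

      ASSETS = {"/silver": "/tmp/SILVER.json", "/gold": "/tmp/GOLD.json"}
      SOURCES = {"/silver": "https://example.com/SILVER.json",   # placeholder URLs
                 "/gold": "https://example.com/GOLD.json"}

      def download(route):
          urllib.request.urlretrieve(SOURCES[route], ASSETS[route])

      def refresh_later(route, delay=60):
          time.sleep(delay)          # re-download each file after one minute
          download(route)

      class Handler(BaseHTTPRequestHandler):
          def do_GET(self):
              if self.path not in ASSETS:
                  return self.send_error(404)
              with open(ASSETS[self.path], "rb") as f:
                  body = f.read()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)

      for route in ASSETS:           # synchronous download: files exist before serving
          download(route)
      for route in ASSETS:           # asynchronous refresh tasks
          threading.Thread(target=refresh_later, args=(route,), daemon=True).start()
      HTTPServer(("", 8080), Handler).serve_forever()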

    The frontend was designed to use Fable for transpiling F# |> JS and to render the graphics as line charts by binding the d3.js and nvd3.js libraries. Some of the bindings were taken from an external repository: FableCharting

    Requirements

    Build

    You need to fetch the Node.js dependencies with yarn install at the root of the repository, then build the Frontend and Backend, in that order. A utility shell script, build.sh, is provided for building the Frontend and Backend.

    The overall steps are:

    yarn install
    ./build.sh

    The build.sh script will call dotnet restore && dotnet fable yarn-run build at the root of Frontend. This generates a /public/bundle.js file as the target. Then, in src/Backend, it will call: dotnet clean; dotnet restore; msbuild; dotnet run. This should generate the binaries, copy the public folder as WebRoot, and serve it as static files.

    The built binaries should be generated in the src/Backend/bin/Debug/netcoreapp1.1/ directory.

    License

    Unlicensed

    Author

    Manoel Vilela

    Visit original content creator repository https://github.com/ryukinix/dchart
  • phi-math-website

    Φ-Math’s website

    Phi-Math's logo. It is a white capital phi on a purple circle

    Prerequisites

    Clone

    We use --recursive (from most code editors, “Git Clone (Recursive)” as opposed to “Git Clone”) to ensure the theme submodule is cloned, too.

    git clone --recursive git@github.com:phi-math-website/phi-math-website.git
    

    Test

    If you leave this command running and modify content, your local test website instance will automatically update.

    hugo server -O
    

    If you make changes to some configuration values, the instance might not update. In that case, stop and rerun the command.

    Build

    Once you are satisfied with your modifications, run:

    hugo
    

    This will generate your website in a new public/ folder. Next, we’ll see how to upload this to events.illc.uva.nl/Phi-Math.

    Deploy

    Prerequisites

    • SSH, or any program like SFTP, SCP or RSync that can work over the SSH protocol.

    Asking for access

    You can email the webmaster, Marco Vervoort, your UvAnetID (e.g., 74958495), explaining that you are a new Φ-Math board member. Marco will grant you access. For the remainder of this guide, we will refer to your UvAnetID as UVANETID within commands/configuration files.

    SSH setup

    Logging in directly to the server with SSH is blocked by ICTS. Instead, you log in via a ‘gateway server’ with hostname pascal.ic.uva.nl. For testing, you can log in with ssh UVANETID@pascal.ic.uva.nl and then, on that server, run ssh UVANETID@goedel.fnwi.uva.nl.

    For practical use you can automate this ‘log through’. To do this:

    1. create a .ssh folder in your home/user directory (e.g., /home/marco or C:\Users\marco)
    2. inside the newly created folder, create a config file (with no file extension) that looks like this:
    Host phimathproxy
      Hostname pascal.ic.uva.nl
      User UVANETID
    
    Host phimath
      Hostname goedel.fnwi.uva.nl
      ProxyJump phimathproxy
      User UVANETID
    

    This ensures that when you run the command ssh phimath on your own computer, SSH will first log onto the gateway server and then do the ‘log through’ automatically. It also configures other SSH-based programs (like the aforementioned SFTP, SCP and Rsync) to use the same mechanism to access goedel.fnwi.uva.nl.

    Initially, this setup means that you have to enter your UvAnetID password twice. If you log in separately with SSH on pascal.ic.uva.nl (using ssh UVANETID@pascal.ic.uva.nl), you can use the command vi_authorized_keys to add a public SSH key (using the vi text editor from the terminal), so that you no longer have to enter a password for the pascal.ic.uva.nl login. If you don’t have a public SSH key yet, you can create one by running ssh-keygen on your own computer (not the web server). This creates two files in the ‘.ssh’ subdirectory on your own computer: ‘.ssh/id_rsa.pub’ (the public SSH key whose content you add as described above) and ‘.ssh/id_rsa’ (a private SSH key that you should not share with others and that SSH uses when logging in).

    On Linux/Windows (with Git Bash)/MacOS systems

    On these systems, if you automate the ‘log-through’ for SSH as described above, the other SSH-based programs (like the aforementioned SFTP, SCP and Rsync) will also use this configuration to access the server. So you can just run things like:

    rsync -r public/* phimath:/var/www/illc/events/Phi-Math/
    

    or, since not all Windows machines come with the rsync command preinstalled:

    scp -r public/* phimath:/var/www/illc/events/Phi-Math/
    

    Website update

    After logging in, the site files may be found in the filesystem at /var/www/illc/events/Phi-Math. You can modify them there.

    Finally, Marco has a request to keep in mind when managing the site files: make sure that all files and subdirectories are created with ‘group-writeable’ permissions, and all subdirectories also with the ‘set-group-id’ flag. This ensures that files and directories belong to the user group www-illc-phimath-science and are editable by all members of this group. To do it manually, we can use the commands

    chmod -R g+w /var/www/illc/events/Phi-Math/*
    chgrp -R www-illc-phimath-science /var/www/illc/events/Phi-Math/*
    find /var/www/illc/events/Phi-Math/* -type d -exec chmod g+s {} +
    

    (Make sure to press enter after pasting this in your terminal!)

    If we forget this step, the other Φ-Math board members will be unable to edit these files!

    For general hosting Marco has a few more requests, but those are not relevant for Φ-Math, as we are using the ‘create static site on your own computer and then upload the pages’ approach. If we ever want to switch, we should contact Marco first.

    Visit original content creator repository https://github.com/phi-math-website/phi-math-website
  • onnx_transformers

    onnx_transformers

    Accelerated NLP pipelines for fast inference 🚀 on CPU. Built with 🤗Transformers and ONNX runtime.

    Installation:

    pip install git+https://github.com/patil-suraj/onnx_transformers

    Usage:

    NOTE : This is an experimental project and only tested with PyTorch

    The pipeline API is similar to transformers pipeline with just a few differences which are explained below.

    Just provide the path/url to the model and it’ll download the model if needed from the hub, automatically create the ONNX graph, and run inference.

    from onnx_transformers import pipeline
    
    # Initialize a pipeline by passing the task name and 
    # set onnx to True (default value is also True)
    >>> nlp = pipeline("sentiment-analysis", onnx=True)
    >>> nlp("Transformers and onnx runtime is an awesome combo!")
    [{'label': 'POSITIVE', 'score': 0.999721109867096}]  

    Or provide a different model using the model argument.

    from onnx_transformers import pipeline
    
    >>> nlp = pipeline("question-answering", model="deepset/roberta-base-squad2", onnx=True)
    >>> nlp({
      "question": "What is ONNX Runtime ?", 
      "context": "ONNX Runtime is a highly performant single inference engine for multiple platforms and hardware"
    })
    {'answer': 'highly performant single inference engine for multiple platforms and hardware', 'end': 94, 'score': 0.751201868057251, 'start': 18}

    Set onnx to False for standard torch inference.

    You can create Pipeline objects for the following down-stream tasks:

    • feature-extraction: Generates a tensor representation for the input sequence
    • ner: Generates named entity mapping for each word in the input sequence.
    • sentiment-analysis: Gives the polarity (positive / negative) of the whole input sequence. Can be used for any text classification model.
    • question-answering: Provided some context and a question referring to the context, it will extract the answer to the question in the context.
    • zero-shot-classification: Classifies the input sequence against a set of candidate labels supplied at inference time, without task-specific fine-tuning.

    Calling the pipeline for the first time loads the model, creates the onnx graph, and caches it for future use. Due to this, the first load will take some time. Subsequent calls to the same model will load the onnx graph automatically from the cache.

    The key difference between HF pipeline and onnx_transformers is that the model parameter should always be a string (path or url to the saved model). Also, the zero-shot-classification pipeline here uses roberta-large-mnli as default model instead of facebook/bart-large-mnli as BART is not yet tested with onnx runtime.

    Benchmarks

    Note: For some reason, onnx is slow in Colab notebooks, so you won’t notice any speed-up there. Benchmark it on your own hardware.

    For detailed benchmarks and other information refer to this blog post and notebook.

    To benchmark the pipelines in this repo, see the benchmark_pipelines notebook.

    (Note: These are not yet comprehensive benchmarks.)
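
    As a rough, do-it-yourself illustration (this is not the repository's benchmark notebook), you could compare the two modes with something like the sketch below; the first call is treated as a warm-up so that the model download and ONNX graph creation are excluded from the timing.

      import time
      from onnx_transformers import pipeline

      def mean_latency(onnx, text, n=100):
          nlp = pipeline("sentiment-analysis", onnx=onnx)
          nlp(text)  # warm-up: loads the model and, for onnx=True, builds/caches the graph
          start = time.perf_counter()
          for _ in range(n):
              nlp(text)
          return (time.perf_counter() - start) / n

      text = "Transformers and onnx runtime is an awesome combo!"
      print("onnx  :", mean_latency(True, text))
      print("torch :", mean_latency(False, text))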

    Benchmark feature-extraction pipeline

    Benchmark question-answering pipeline

    Visit original content creator repository https://github.com/patil-suraj/onnx_transformers
  • IoT-Based-Remote-Sensor-Data-Monitoring-and-Actuator-Control

    IoT-Based-Remote-Sensor-Data-Monitoring-and-Actuator-Control

    System Implementation for the Water Tank Example

    Introduction

    > As current housing systems move towards automation, more attention is given to the systems used within the house than to the customer's requirements. The systems available on the current market are complex and expensive. The objective of the “IoT based remote sensor data monitoring and actuator control” project is to create a partially open-source monitoring system that can be customized to the individual requirements of the customer, is cheaper than the available market alternatives, and is user-friendly.

    Components Used for this Project

    Sensors Used for this Project

    Process Explanation and Usage of Components

    Since this monitoring system can be used for various applications, it is difficult to reproduce every possible scenario. Hence, a prototype for the home water tank use case was built.

    • DHT11: temperature and humidity readings.
    • Water level sensor: avoids overflow of the water tank and triggers an emergency stop when the desired level is reached.
    • MQTT: the communication protocol used between the sensors and between the sensor node and the relay.
    • Raspberry Pi Zero: gathers the sensor data.
    • Raspberry Pi 4: controls the relay and power supply.
    • HiveMQ broker: used to test published/subscribed messages.
    • InfluxDB: the database used for storing the water level and DHT11 sensor streaming time-series data.
    • Grafana: used for visualizing the temperature and humidity streaming time-series data.

    The process flow of the system is as follows:

    The above picture illustrates the monitoring system for the water level in a smart home network. The water level sensor and the DHT11 sensor are connected to the Raspberry Pi Zero. First, the Raspberry Pi Zero gathers streaming data from the DHT11 sensor, the water level sensor, and the ultrasonic sensor. The readings are converted into actions or messages, and those messages are published to a particular MQTT topic. Next, the Raspberry Pi 4 subscribes to that topic in order to receive the sensor values published by the Raspberry Pi Zero. Based on the message, the Raspberry Pi 4 switches the relay to manage the power supply for the motor, and it also pushes the streaming time-series data and messages into InfluxDB. Finally, Grafana is connected to InfluxDB to visualize the streaming time-series data.
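
    To make the MQTT leg of this flow concrete, here is a minimal sketch using the paho-mqtt Python client. The broker address, topic name, payload format, and threshold are illustrative assumptions, not the project's actual values, and the relay/GPIO handling is omitted.

      import json
      import paho.mqtt.client as mqtt   # paho-mqtt 1.x style; 2.x needs a CallbackAPIVersion argument

      BROKER = "broker.hivemq.com"      # assumed broker address
      TOPIC = "home/watertank/level"    # assumed topic name

      # Publisher side (Raspberry Pi Zero): send one sensor reading.
      publisher = mqtt.Client()
      publisher.connect(BROKER, 1883)
      publisher.publish(TOPIC, json.dumps({"level_percent": 72, "temp_c": 24.5, "humidity": 60}))
      publisher.disconnect()

      # Subscriber side (Raspberry Pi 4): decide the relay state from the reported level.
      def on_message(client, userdata, msg):
          level = json.loads(msg.payload)["level_percent"]
          print("relay", "OFF" if level >= 90 else "ON")   # stop the pump near the top of the tank

      subscriber = mqtt.Client()
      subscriber.on_message = on_message
      subscriber.connect(BROKER, 1883)
      subscriber.subscribe(TOPIC)
      subscriber.loop_forever()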

    Visit original content creator repository https://github.com/KrishArul26/IoT-Based-Remote-Sensor-Data-Monitoring-and-Actuator-Control
  • react-redux-modal-provider

    React Redux Modal Provider

    react-redux-modal-provider controls the state of your React modal components using Redux.

    Installation

    npm i --save react-redux-modal-provider
    

    Usage

    1. Add <ModalProvider> to your root component.

    import ModalProvider from 'react-redux-modal-provider';
    
    export default render(
      <Provider store={store}>
        <div>
          <App />
          <ModalProvider />
        </div>
      </Provider>,
      document.getElementById('app')
    );

    2. Plug in Modal Provider reducer.

    import { reducer as modalProvider } from 'react-redux-modal-provider';
    
    export default combineReducers({
      modalProvider,
    });

    3. Add modal creation code.

    // app.jsx
    import { showModal } from 'react-redux-modal-provider';
    import MyModal from './myModal';
    
    export default (props) => (
      <div>
        <p>
          Hello world
        </p>
        <button
          type="button"
          onClick={() => showModal(MyModal, { message: 'Hello' })}>
          Present modal
        </button>
      </div>
    );

    4. Handle modal closing.

    // myModal.jsx
    import { Modal, Button } from 'react-bootstrap';
    
    export default (props) => (
      <Modal show={props.show}>
        <Modal.Body>
          {props.message}
        </Modal.Body>
    
        <Modal.Footer>
          <Button onClick={props.hideModal}>Ok</Button>
        </Modal.Footer>
      </Modal>
    );

    show and hideModal props are passed in automatically.

    Implementations

    StackableModalProvider (default)

    This is the default ModalProvider implementation. Each new modal stacks up on top of the previous one.

    import { StackableModalProvider } from 'react-redux-modal-provider';
    
    export default render(
      <Provider store={store}>
        <div>
          <App />
          <StackableModalProvider />
        </div>
      </Provider>,
      document.getElementById('app')
    );

    SingleModalProvider

    One modal at a time. Each new modal triggers hideModal on the previous one.

    import { SingleModalProvider } from 'react-redux-modal-provider';
    
    export default render(
      <Provider store={store}>
        <div>
          <App />
          <SingleModalProvider />
        </div>
      </Provider>,
      document.getElementById('app')
    );

    How is it different from redux-modal?

    1. You don’t have to think about where your modal component should fit into the component tree, because it doesn’t really matter where a modal is rendered.

    2. No need to connect() your modal component to Redux, unless you want it to be able to create other modals itself.

    Acknowledgements

    License

    MIT

    Visit original content creator repository
    https://github.com/mayask/react-redux-modal-provider

  • ZipArchive

    cZipArchive

    A single-class pure VB6 library for zip archives management

    Usage

    Just include cZipArchive.cls in your project and start using instances of the class like this:

    Simple compression

    With New cZipArchive
        .AddFile App.Path & "\your_file"
        .CompressArchive App.Path & "\test.zip"
    End With
    

    Compress all files and sub-folders

    With New cZipArchive
        .AddFromFolder "C:\Path\To\*.*", Recursive:=True
        .CompressArchive App.Path & "\archive.zip"
    End With
    

    Decompress all files from archive

    With New cZipArchive
        .OpenArchive App.Path & "\test.zip"
        .Extract "C:\Path\To\extract_folder"
    End With
    

    Method Extract can optionally filter on file mask (e.g. Filter:="*.doc"), file index (e.g. Filter:=15) or array of booleans with each entry to decompress index set to True.

    Extract single file to target filename

    OutputTarget can include a target new_filename to be used when extracting a specific file from the archive.

    With New cZipArchive
        .OpenArchive App.Path & "\test.zip"
        .Extract "C:\Path\To\extract_folder\new_filename", Filter:="your_file"
    End With
    

    Get archive entry uncompressed size

    Use the FileInfo property, keyed on the entry filename in the first parameter and zipIdxSize in the second, like this:

    With New cZipArchive
        .OpenArchive App.Path & "\test.zip"
        Debug.Print .FileInfo("report.pdf", zipIdxSize)
    End With
    

    List files in zip archive

    Use the FileInfo property, keyed on the entry's numeric index in the first parameter, like this:

    Dim lIdx            As Long
    With New cZipArchive
        .OpenArchive App.Path & "\test.zip"
        For lIdx = 0 To .FileCount - 1
            Debug.Print "FileName=" & .FileInfo(lIdx, zipIdxFileName) & ", Size=" & .FileInfo(lIdx, zipIdxSize)
        Next
    End With
    

    Here is a list of available values for the second parameter of FileInfo:

    Value  Name
    0 zipIdxFileName
    1 zipIdxAttributes
    2 zipIdxCrc32
    3 zipIdxSize
    4 zipIdxCompressedSize
    5 zipIdxComment
    6 zipIdxLastModified
    7 zipIdxMethod
    8 zipIdxOffset
    9 zipIdxFlags

    Encryption support

    Make sure to set Conditional Compilation in the Make tab of the project’s properties dialog to include the ZIP_CRYPTO = 1 setting, so that crypto support gets compiled from the sources. By default, crypto support is not compiled, to reduce the footprint of the final executable.

    With New cZipArchive
        .OpenArchive App.Path & "\test.zip"
        .Extract App.Path & "\test", Password:="123456"
    End With
    

    Use the Password parameter of the AddFile method together with the EncrStrength parameter to set the encryption used when creating an archive.

    EncrStrength Mode
    0 ZipCrypto (default)
    1 AES-128
    2 AES-192
    3 AES-256 (recommended)

    Note that the default ZipCrypto encryption is weak, but it is the only option compatible with the built-in zip-folders support in Windows Explorer.

    In-memory operations

    The sample utility function ReadBinaryFile in /test/basic/Form1.frm returns a byte array with the file’s content.

    Dim baZip() As Byte
    With New cZipArchive
        .AddFile ReadBinaryFile("sample.pdf"), "report.pdf"
        .CompressArchive baZip
    End With
    WriteBinaryFile "test.zip", baZip
    

    The Extract method accepts a byte array target too.

    Dim baOutput() As Byte
    With New cZipArchive
        .OpenArchive ReadBinaryFile("test.zip")
        .Extract baOutput, Filter:=0    '--- archive's first file only
    End With
    

    ToDo (not supported yet)

    - Deflate64 (de)compressor
    - VBA7 (x64) support
    

    Visit original content creator repository
    https://github.com/wqweto/ZipArchive

  • meetup-rest-api

    meetup-rest-api

    Instructions for Running the Project

    1. Download and install a Java servlet container (for example, Apache Tomcat)
    2. Prepare a database for the application
      • Create a PostgreSQL database
      • Run the scripts from /databaseScripts/databaseScripts.sql in the new database (to create the schema)
    3. Build the application's WAR artifact
      • Run the command ./gradlew war
      • The finished application artifact will appear in the /build/libs/ directory
    4. Deploy the WAR file in the servlet container
    5. API requests can be sent to URLs of the form: http://localhost:8080/meetup-rest-api-0-0-1/meetup

    API Request Formats

    • Get all meetups

      Example request:

    GET http://localhost:8080/meetup-rest-api-0-0-1/meetup HTTP/1.1
    Content-Type: application/json
    
    {
      "filterParameters": {
        "agenda": "meet"
      },
      "sortingParameters": [
        "agenda"
      ]
    }

    where filterParameters and sortingParameters are optional parameters that define the filtering and sorting of the
    results. Supported meetup parameters: agenda, dateTime, organizer

    • Get a meetup

      Example request:

    GET http://localhost:8080/meetup-rest-api-0-0-1/meetup/{id} HTTP/1.1

    where {id} is the id of the requested meetup

    • Add a new meetup

      Example request:

    PUT http://localhost:8080/meetup-rest-api-0-0-1/meetup HTTP/1.1
    Content-Type: application/json
    
    {
      "agenda": "Very Important Things",
      "description": "We discuss some very important Things",
      "organizer": "An important person",
      "dateTime": "17.11.2022 11:41",
      "location": "An important office"
    }
    • Delete a meetup

      Example request:

    DELETE http://localhost:8080/meetup-rest-api-0-0-1/meetup/{id} HTTP/1.1

    where {id} is the id of the meetup to delete

    • Update a meetup

      Example request:

    POST http://localhost:8080/meetup-rest-api-0-0-1/meetup/{id} HTTP/1.1
    Content-Type: application/json
    
    {
      "agenda": "Very Important Things",
      "description": "We discuss some very important Things",
      "organizer": "An important person",
      "dateTime": "17.11.2022 11:41",
      "location": "An important office"
    }

    where {id} is the id of the meetup to update
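
    Purely as an illustration (this client is not part of the project), the same requests can be issued from Python with the requests library; the URL assumes the deployment path shown above.

      import requests

      BASE_URL = "http://localhost:8080/meetup-rest-api-0-0-1/meetup"

      # Add a new meetup (PUT).
      requests.put(BASE_URL, json={
          "agenda": "Very Important Things",
          "description": "We discuss some very important Things",
          "organizer": "An important person",
          "dateTime": "17.11.2022 11:41",
          "location": "An important office",
      })

      # Get all meetups, optionally filtered and sorted by agenda (GET with a JSON body).
      body = {"filterParameters": {"agenda": "meet"}, "sortingParameters": ["agenda"]}
      response = requests.request("GET", BASE_URL, json=body)
      print(response.json())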

    Visit original content creator repository
    https://github.com/SirJohanot/meetup-rest-api

  • old-armci-mpi

    Authors

    • James Dinan (MPI-2 implementation)
    • Jeff Hammond (MPI-3 implementation)

    Introduction

    This project provides a full, high performance, portable implementation of the ARMCI runtime system using MPI’s remote memory access (RMA) functionality.

    Quality Assurance

    Build Status

    See Travis for failure details. All recent failures have been caused by dependencies (system toolchain or MPI library).

    Installing Only ARMCI-MPI

    ARMCI-MPI uses autoconf and must be configured before compiling:

     $ ./configure
    

    Many configure options are provided, run configure --help for details. After configuring the source tree, the code can be built and installed by running:

     $ make && make install
    

    MPI Library Issues

    The quality of MPI-RMA implementations varies. We recommend that you always use the absolute latest release or release candidate version of MPI unless you are aware of a specific issues that prevents this.

    With the exception of the IBM Blue Gene platforms, all MPI libraries known to the ARMCI-MPI developers support MPI-3 RMA, hence the MPI-2 RMA implementation has been deprecated and is no longer supported. If you are using ARMCI-MPI on a Blue Gene system, use the legacy branch or contact the developers for assistance.

    MPI-3

    As of September 2018, MPICH 3.3b3 and Open-MPI 3.1.2 are passing all of the tests in Travis CI. Information about other implementations will be added here soon.

    As of April 2014, the following implementations were known to work correctly with ARMCI-MPI (MPI-3 version):

    • MPICH 3.0.4 and later on Mac, Linux SMPs and SGI SMPs.
    • MVAPICH2 2.0a and later on Linux InfiniBand clusters.
    • CrayMPI 6.1.0 and later on Cray XC30.
    • SGI MPT 2.09 on SGI SMPs.
    • Open-MPI development version on Mac (set ARMCI_STRIDED_METHOD=IOV and ARMCI_IOV_METHOD=BATCHED)

    Note that a bug in MPICH 3.0 or 3.1 that propagated to MVAPICH2, Cray MPI and Intel MPI affects correctness when windows are backed by shared-memory. This bug affects ARMCI_Rmw and is avoided by setting ARMCI_USE_WIN_ALLOCATE=0 in your runtime environment. This may negatively affect performance in some cases and prevents one from using Casper, hence is not the default.

    MPI-2

    As of August, 2011 the following MPI-2 implementations were known to work correctly with ARMCI-MPI (MPI-2 version):

    • MPICH2 and MPICH 3+
    • MVAPICH2 1.6
    • Cray MPI on Cray XE6
    • IBM MPI on BG/P (set ARMCI_STRIDED_METHOD=IOV and ARMCI_IOV_METHOD=BATCHED for performance reasons)
    • Open-MPI 1.5.4 (set ARMCI_STRIDED_METHOD=IOV and ARMCI_IOV_METHOD=BATCHED for correctness reasons)

    The following MPI-2 implementations are known to fail with ARMCI-MPI:

    • MVAPICH2 prior to 1.6

    Installing Global Arrays with ARMCI-MPI

    To build GA (version 5.2 or later) with ARMCI-MPI (any version), use the configure option --with-armci=$(PATH_TO_ARMCI_MPI) and make sure that you use the same MPI implementation with GA that was used to compile ARMCI-MPI.

    ARMCI-MPI (MPI-3) has been tested extensively with GA since version 5.2.

    Installing NWChem with ARMCI-MPI

    If you are an NWChem user, you can use ${NWCHEM_TOP}/src/tools/install-armci-mpi without having to download or build ARMCI-MPI manually.

    The ARMCI-MPI Test Suite

    ARMCI-MPI includes a set of testing and benchmark programs located under tests/ and benchmarks/. These programs can be compiled and run via:

    $ make check MPIEXEC="mpiexec -n 4"
    

    The MPIEXEC variable is optional and is used to override the default MPI launch command. If you want only to build the test suite, the following target can be used:

    $ make checkprogs
    

    ARMCI-MPI Errata

    Direct access to local buffers

    • Because of MPI-2’s semantics, you are not allowed to access shared memory directly; access must go through put/get. Alternatively, you can use the new ARMCI_Access_begin/end() functions.

    • MPI-3 allows direct access provided one uses a synchronization operation afterwards. The ARMCI_Access_begin/end() functions are also valid.

    Progress semantics

    • On some MPI implementations and networks you may need to enable implicit progress. In many cases this is done through an environment variable. For MPICH2: set MPICH_ASYNC_PROGRESS; for MVAPICH2 recompile with --enable-async-progress and set MPICH_ASYNC_PROGRESS; set DCMF_INTERRUPTS=1 for MPI on BGP; etc.

    See this page for more information on activating asynchronous progress in MPI. However, we find that most platforms show no improvement and often a decrease in performance, provided the application makes calls to GA/ARMCI/MPI frequently enough on all processes.

    We recommend the use of Casper for asynchronous progress in ARMCI-MPI. See the Casper website for details.

    Environment Variables:

    Boolean environment variables are enabled when set to a value beginning with ‘t’, ‘T’, ‘y’, ‘Y’, or ‘1’; any other value is interpreted as false.

    Debugging Options

    ARMCI_VERBOSE (boolean)

    Enable extra status output from ARMCI-MPI.

    ARMCI_DEBUG_ALLOC (boolean)

    Turn on extra shared allocation debugging.

    ARMCI_FLUSH_BARRIERS (boolean) (deprecated)

    Enable/disable extra communication flushing in ARMCI_Barrier. Extra flushes are present to help make unsafe DLA safer. (This option is deprecated with the ARMCI-MPI3 implementation.)

    Performance Options

    ARMCI_CACHE_RANK_TRANSLATION (boolean)

    Create a table to more quickly translate between absolute and group ranks.

    ARMCI_PROGRESS_THREAD (boolean)

    Create a Pthread to poke the MPI progress engine.

    ARMCI_PROGRESS_USLEEP (int)

    Argument to usleep() to pause the progress polling loop.

    Noncollective Groups

    ARMCI_NONCOLLECTIVE_GROUPS (boolean)

    Enable noncollective ARMCI group formation; group creation is collective on the output group rather than the parent group.

    Shared Buffer Protection

    ARMCI_SHR_BUF_METHOD = { COPY (default), NOGUARD }

    ARMCI policy for managing shared origin buffers in communication operations: lock the buffer (unsafe, but fast), copy the buffer (safe), or don’t guard the buffer – assume that the system is cache coherent and MPI supports unlocked load/store.

    Strided Options

    ARMCI_STRIDED_METHOD = { DIRECT (default), IOV }

    Select the method for processing strided operations.

    I/O Vector Options

    ARMCI_IOV_METHOD = { AUTO (default), CONSRV, BATCHED, DIRECT }

    Select the IO vector communication strategy: automatic; a “conservative” implementation that does lock/unlock around each operation; an implementation that issues batches of operations within a single lock/unlock epoch; and a direct implementation that generates datatypes for the origin and target and issues a single operation using them.

    ARMCI_IOV_CHECKS (boolean)

    Enable (expensive) IOV safety/debugging checks (not recommended for performance runs).

    ARMCI_IOV_BATCHED_LIMIT = { 0 (default), 1, … }

    Set the maximum number of one-sided operations per epoch for the BATCHED IOV method. Zero (default) is unlimited.

    Visit original content creator repository https://github.com/jeffhammond/old-armci-mpi
  • array-little-endian-factory

    About stdlib…

    We believe in a future in which the web is a preferred environment for numerical computation. To help realize this future, we’ve built stdlib. stdlib is a standard library, with an emphasis on numerical and scientific computation, written in JavaScript (and C) for execution in browsers and in Node.js.

    The library is fully decomposable, being architected in such a way that you can swap out and mix and match APIs and functionality to cater to your exact preferences and use cases.

    When you use stdlib, you can be absolutely certain that you are using the most thorough, rigorous, well-written, studied, documented, tested, measured, and high-quality code out there.

    To join us in bringing numerical computing to the web, get started by checking us out on GitHub, and please consider financially supporting stdlib. We greatly appreciate your continued support!

    littleEndianFactory

    Return a typed array constructor for creating typed arrays stored in little-endian byte order.

    In contrast to the built-in typed array constructors which store values according to the host platform byte order, the typed array constructors returned by the factory function always access elements in little-endian byte order. Such enforcement can be particularly advantageous when working with memory buffers which do not necessarily follow host platform byte order, such as WebAssembly memory.

    Installation

    npm install @stdlib/array-little-endian-factory

    Alternatively,

    • To load the package in a website via a script tag without installation and bundlers, use the ES Module available on the esm branch (see README).
    • If you are using Deno, visit the deno branch (see README for usage instructions).
    • For use in Observable, or in browser/node environments, use the Universal Module Definition (UMD) build available on the umd branch (see README).

    The branches.md file summarizes the available branches and displays a diagram illustrating their relationships.

    To view installation and usage instructions specific to each branch build, be sure to explicitly navigate to the respective README files on each branch, as linked to above.

    Usage

    var littleEndianFactory = require( '@stdlib/array-little-endian-factory' );

    littleEndianFactory( dtype )

    Returns a typed array constructor for creating typed arrays having a specified data type and stored in little-endian byte order.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    // returns <Function>
    
    var Float32ArrayLE = littleEndianFactory( 'float32' );
    // returns <Function>

    Typed Array Constructor

    TypedArrayLE()

    A typed array constructor which returns a typed array representing an array of values in little-endian byte order.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE();
    // returns <Float64ArrayLE>

    TypedArrayLE( length )

    Returns a typed array having a specified length.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 5 );
    // returns <Float64ArrayLE>

    TypedArrayLE( typedarray )

    Creates a typed array from another typed array.

    var Float32Array = require( '@stdlib/array-float32' );
    
    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr1 = new Float32Array( [ 0.5, 0.5, 0.5 ] );
    var arr2 = new Float64ArrayLE( arr1 );
    // returns <Float64ArrayLE>
    
    var v = arr2.get( 0 );
    // returns 0.5

    TypedArrayLE( obj )

    Creates a typed array from an array-like object or iterable.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( [ 0.5, 0.5, 0.5 ] );
    // returns <Float64ArrayLE>
    
    var v = arr.get( 0 );
    // returns 0.5

    TypedArrayLE( buffer[, byteOffset[, length]] )

    Returns a typed array view of an ArrayBuffer.

    var ArrayBuffer = require( '@stdlib/array-buffer' );
    
    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var buf = new ArrayBuffer( 32 );
    var arr = new Float64ArrayLE( buf, 0, 4 );
    // returns <Float64ArrayLE>

    Typed Array Properties

    TypedArrayLE.BYTES_PER_ELEMENT

    Number of bytes per view element.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var nbytes = Float64ArrayLE.BYTES_PER_ELEMENT;
    // returns 8

    TypedArrayLE.name

    Typed array constructor name.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var str = Float64ArrayLE.name;
    // returns 'Float64ArrayLE'

    TypedArrayLE.prototype.buffer

    Read-only property which returns the ArrayBuffer referenced by the typed array.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 5 );
    var buf = arr.buffer;
    // returns <ArrayBuffer>

    TypedArrayLE.prototype.byteLength

    Read-only property which returns the length (in bytes) of the typed array.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 5 );
    var byteLength = arr.byteLength;
    // returns 40

    TypedArrayLE.prototype.byteOffset

    Read-only property which returns the offset (in bytes) of the typed array from the start of its ArrayBuffer.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 5 );
    var byteOffset = arr.byteOffset;
    // returns 0

    TypedArrayLE.prototype.BYTES_PER_ELEMENT

    Number of bytes per view element.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 5 );
    var nbytes = arr.BYTES_PER_ELEMENT;
    // returns 8

    TypedArrayLE.prototype.length

    Read-only property which returns the number of view elements.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 5 );
    var len = arr.length;
    // returns 5

    Typed Array Methods

    TypedArrayLE.from( src[, map[, thisArg]] )

    Creates a new typed array from an array-like object or an iterable.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = Float64ArrayLE.from( [ 1.0, -1.0 ] );
    // returns <Float64ArrayLE>
    
    var v = arr.get( 0 );
    // returns 1.0

    To invoke a function for each src value, provide a callback function.

    function mapFcn( v ) {
        return v * 2.0;
    }
    
    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = Float64ArrayLE.from( [ 1.0, -1.0 ], mapFcn );
    // returns <Float64ArrayLE>
    
    var v = arr.get( 0 );
    // returns 2.0

    A callback function is provided two arguments:

    • value: source value.
    • index: source index.

    To set the callback execution context, provide a thisArg.

    function mapFcn( v ) {
        this.count += 1;
        return v * 2.0;
    }
    
    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var ctx = {
        'count': 0
    };
    
    var arr = Float64ArrayLE.from( [ 1.0, -1.0 ], mapFcn, ctx );
    // returns <Float64ArrayLE>
    
    var v = arr.get( 0 );
    // returns 2.0
    
    var n = ctx.count;
    // returns 2

    TypedArrayLE.of( element0[, element1[, …elementN]] )

    Creates a new typed array from a variable number of arguments.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = Float64ArrayLE.of( 1.0, -1.0 );
    // returns <Float64ArrayLE>
    
    var v = arr.get( 0 );
    // returns 1.0

    TypedArrayLE.prototype.get( i )

    Returns an array element located at a nonnegative integer position (index) i.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 10 );
    
    // Set the first element:
    arr.set( 1.0, 0 );
    
    // Get the first element:
    var v = arr.get( 0 );
    // returns 1.0

    If provided an out-of-bounds index, the method returns undefined.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( 10 );
    
    var v = arr.get( 100 );
    // returns undefined

    TypedArrayLE.prototype.set( arr[, offset] )

    Sets array elements.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( [ 1.0, 2.0, 3.0 ] );
    // returns <Float64ArrayLE>
    
    var v = arr.get( 0 );
    // returns 1.0
    
    v = arr.get( 1 );
    // returns 2.0
    
    // Set the first two array elements:
    arr.set( [ 4.0, 5.0 ] );
    
    v = arr.get( 0 );
    // returns 4.0
    
    v = arr.get( 1 );
    // returns 5.0

    By default, the method starts writing values at the first array index. To specify an alternative index, provide an index offset.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( [ 1.0, 2.0, 3.0 ] );
    // returns <Float64ArrayLE>
    
    // Set the last two array elements:
    arr.set( [ 4.0, 5.0 ], 1 );
    
    var v = arr.get( 1 );
    // returns 4.0
    
    v = arr.get( 2 );
    // returns 5.0

    A few notes:

    • If i is out-of-bounds, the method throws an error.
    • If a target array cannot accommodate all values (i.e., the length of source array plus i exceeds the target array length), the method throws an error.
    • If provided a typed array which shares an ArrayBuffer with the target array, the method will intelligently copy the source range to the destination range.

    TypedArrayLE.prototype.toString()

    Serializes an array as a string.

    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    var arr = new Float64ArrayLE( [ 1.0, 2.0, 3.0 ] );
    
    var str = arr.toString();
    // returns '1,2,3'

    Notes

    • While returned constructors strive to maintain (but do not guarantee) consistency with typed arrays, significant deviations from ECMAScript-defined typed array behavior are as follows:

      • Constructors do not require the new operator.
      • Accessing array elements using bracket syntax (e.g., X[i]) is not supported. Instead, one must use the .get() method.

    Examples

    var Float64Array = require( '@stdlib/array-float64' );
    var logEach = require( '@stdlib/console-log-each' );
    var littleEndianFactory = require( '@stdlib/array-little-endian-factory' );
    
    var Float64ArrayLE = littleEndianFactory( 'float64' );
    
    // Create a typed array by specifying a length:
    var out = new Float64ArrayLE( 3 );
    logEach( '%s', out );
    
    // Create a typed array from an array:
    var arr = [ 1.0, -1.0, -3.14, 3.14, 0.5, 0.5 ];
    out = new Float64ArrayLE( arr );
    logEach( '%s', out );
    
    // Create a typed array from an array buffer:
    arr = new Float64Array( [ 1.0, -1.0, -3.14, 3.14, 0.5, 0.5 ] ); // host byte order
    out = new Float64ArrayLE( arr.buffer );
    logEach( '%s', out );
    
    // Create a typed array from an array buffer view:
    arr = new Float64Array( [ 1.0, -1.0, -3.14, 3.14, 0.5, 0.5 ] ); // host byte order
    out = new Float64ArrayLE( arr.buffer, 8, 2 );
    logEach( '%s', out );

    Notice

    This package is part of stdlib, a standard library for JavaScript and Node.js, with an emphasis on numerical and scientific computing. The library provides a collection of robust, high performance libraries for mathematics, statistics, streams, utilities, and more.

    For more information on the project, filing bug reports and feature requests, and guidance on how to develop stdlib, see the main project repository.

    Community

    Chat


    License

    See LICENSE.

    Copyright

    Copyright © 2016-2025. The Stdlib Authors.

    Visit original content creator repository https://github.com/stdlib-js/array-little-endian-factory