Blog

  • VirtualRehabTokenSaleDocs

    Virtual Rehab Token Sale Documentation

    This repository includes all documentation pertaining to the Virtual Rehab Token Sale.

    White Paper

    The White Paper provides a comprehensive overview of the Virtual Rehab project.
    

    Light Paper

    The Light Paper provides a summarized version of the Virtual Rehab White Paper.
    

    One Pager

    The One Pager provides a high level overview of the Virtual Rehab project.
    

    Pitch Deck

    The Pitch Deck provides a presentation mode overview of the Virtual Rehab project.
    

    Thank you in advance for taking the time to read through the Virtual Rehab documentation.

    About Virtual Rehab

    Virtual Rehab’s evidence-based solution leverages the advancements in virtual reality, artificial intelligence, and blockchain technologies for psychological rehabilitation of vulnerable populations (pain management, prevention of substance use disorders, enhancement of autistic individuals’ communication skills, and rehabilitation of repeat offenders).

    Virtual Rehab’s all-encompassing solution covers the following pillars:

    • Virtual Reality – A virtual simulation of the real world using cognitive behavior and exposure therapy to trigger and to cope with temptations
    • Artificial Intelligence – A unique expert system to identify areas of risk, to make treatment recommendations, and to predict post-therapy behavior
    • Blockchain – A secure network to ensure privacy and decentralization of all data and all information relevant to vulnerable populations
    • VRH Token – An ERC20 utility token that empowers users to purchase services and to be rewarded for seeking help through Virtual Rehab’s online portal

The Virtual Rehab (VRH) token has been created as the currency used within the Virtual Rehab network. Users will be able to buy and sell VRH tokens on exchanges. The token follows the Ethereum ERC20 standard and its widely adopted implementation conventions, which allows token holders to easily store and manage their VRH tokens using existing solutions, including ERC20-compatible Ethereum wallets.

When dealing with the most vulnerable populations, the privacy and security of shared information and data become extremely important. Unfortunately, as of today, this information is publicly available and can be accessed through online databases that expose the identities of these vulnerable populations. This prevents them from re-integrating into society and becoming an effective part of it.

Fortunately, this level of privacy and data protection becomes possible when blockchain technology is integrated into the Virtual Rehab solution. We will not gather the first or last names of these vulnerable populations; each person will be associated only with a wallet address. The only information gathered will be age, gender, race, biometrics (heart rate, blood pressure, biodermal activity), and eye-tracking data. This ensures complete HIPAA compliance and the anonymity of these populations' information and data.
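A minimal sketch of the kind of pseudonymous record described above (illustrative field names only, not Virtual Rehab's actual schema): identity is reduced to a wallet address, and no name fields exist at all.

```python
from dataclasses import dataclass, field

@dataclass
class PseudonymousRecord:
    # Identity is only a wallet address; no name fields are stored.
    wallet_address: str
    age: int
    gender: str
    race: str
    # Biometrics such as heart rate, blood pressure, biodermal activity
    biometrics: dict = field(default_factory=dict)
    eye_tracking: list = field(default_factory=list)

rec = PseudonymousRecord(
    wallet_address="0xabc",  # hypothetical address
    age=34,
    gender="F",
    race="other",
    biometrics={"heart_rate_bpm": 72},
)
```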

Moreover, Virtual Rehab is solving an even bigger problem: data sharing among medical institutions, correctional departments, researchers, and patients worldwide. As of today, this data cannot be accessed due to privacy and patient-protection laws. With the Virtual Rehab solution, everyone will have access to a completely anonymous database of input data and metrics, enabling global collaboration among researchers and medical professionals and further enhancing existing research in the area of mental health. Patients will also be able to access this data, find relations to their existing symptoms, and apply best practices accordingly.

    The VRH token has four use cases:

    • Allows users to order and download programs from Virtual Rehab Online Portal
    • Allows users to request additional analysis (using Virtual Rehab unique expert system, which uses Artificial Intelligence) of executed programs
    • Incentivizes users with VRH tokens reward for seeking help and counselling from medical doctors, psychologists, and therapists (Proof of Therapy)
    • Allows users to pay for services received at the Virtual Rehab Therapy Center (VRTC)

The VRH token’s utility has been confirmed by a Maltese law firm to be a Virtual Financial Asset (VFA) and not a financial instrument, per the Tweet below:

    https://twitter.com/ViRehab/status/1130867718276734976

    Some of Virtual Rehab’s notable successes include the following:

    • Evidence-based solution with proven efficacy results approved by physicians, psychologists, and therapists
    • 87% of participating patients have shown an overall improvement across various metrics
    • Described by US Digital Government Head as a “capability that is very very promising for public services”
    • Only VR/AI company included in the US Department of Justice, Institute of Corrections Environmental Scan report
• Partnership agreements in place across North America, Europe, the Middle East, and APAC regions
    • Only company to represent Canada as part of the Canadian Delegation to Arab Health
    • Selected as one of Canada’s most promising high-growth life sciences companies (Dose of the Valley, CA)
    • Featured by Microsoft’s leadership team at the Microsoft Inspire Innovation Session
    • Nominated by The Wall Street Journal for the WSJ D.LIVE Startup Showcase (Laguna Beach, CA)
    • Ranked by Spanish media as the first option for training correctional officers and rehabilitation of offenders using virtual reality
    • Founder awarded with the “Expert” status by the United Nations Global Sustainable Consumption & Production (SCP) Programme with focus on Sustainable Lifestyle and Education
    • Selected as one of the top innovative companies in Montreal, Quebec, Canada and will be included within the Montreal Innovation Ecosystem publication
    • Ranked 1st in “Top 10 To Watch” by England’s 21Cryptos Magazine – the leading Cryptocurrency and Blockchain magazine, with monthly content from dozens of the industry’s experts, public figures, and traders
    • Featured by the media across 28 countries worldwide

    Thank you for taking the time to learn more about Virtual Rehab. We appreciate your interest in what we do.

    Tokenomics

Type                       Description
Token Type                 ERC20 Token
Name                       Virtual Rehab Token
Ticker                     VRH
Decimals                   18
Total Supply               336.225 million VRH
Maximum Supply             400 million VRH
Circulating Supply         TBA
Contributions Accepted In  ETH / BNB / CS
Contract Address           0x0914b7ae021c229b7A51fF936f8FFc8C81fbCEA7
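Because the token uses 18 decimals (standard for ERC20), on-chain balances are stored as large integers of base units, much like wei for ETH. A hypothetical helper for converting between raw base units and human-readable VRH amounts (illustrative only, not part of the official contract):

```python
from decimal import Decimal

DECIMALS = 18  # from the tokenomics table above

def to_vrh(raw_units: int) -> Decimal:
    """Convert an on-chain integer balance to a human-readable VRH amount."""
    return Decimal(raw_units) / Decimal(10) ** DECIMALS

def to_raw_units(amount: str) -> int:
    """Convert a human-readable VRH amount to the integer base units stored on-chain."""
    return int(Decimal(amount) * Decimal(10) ** DECIMALS)

# 1.5 VRH is stored on-chain as 1.5 * 10^18 base units
assert to_raw_units("1.5") == 1_500_000_000_000_000_000
assert to_vrh(1_500_000_000_000_000_000) == Decimal("1.5")
```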

    Links

    Visit original content creator repository
    https://github.com/ViRehab/VirtualRehabTokenSaleDocs

  • clonk_transpilation

    CLONK-CoupLing tOpology beNchmarKs

    Link to paper: https://ieeexplore.ieee.org/abstract/document/10071036


    📌 Project Overview

    • Overview: This project innovates in the field of superconducting quantum computers by employing a SNAIL modulator to mitigate noise challenges prevalent in NISQ systems. It enhances qubit coupling and overall performance, offering a robust alternative to traditional designs.

    • Objective: To develop, optimize, and benchmark a SNAIL-based quantum computing architecture that excels in noise management and qubit coupling efficiency.

    • What’s Inside:

      • ConfigurableBackendV2: A module for programmatically creating qubit-connectivity backends.
      • Transpilation Pass Manager: Designed for quick basis gate translation.
      • Circuit and Backend Suites: Includes circuit_suite.py and backend_suite_v3.py for various benchmarks.
      • Demo: See HPCA_artifact.ipynb for a comprehensive demo focused on topology diameters and data movement operations.

    🚀 Getting Started

    Install using pip:

    pip install -e git+https://github.com/Pitt-JonesLab/clonk_transpilation#egg=clonk_transpilation
    

    or get started by exploring the main demo located at HPCA_artifact.ipynb.

    📋 Prerequisites

• Set up everything using the make init command.

    • Package Dependencies:

      • Dependency for topology plots: sudo apt install graphviz

    💻🐒 Usage

    Backend Creation

In /src/clonk/backend_utils/mock_backends, target topologies are created by implementing ConfigurableFakeBackendV2, an abstract class defined in src/clonk/backend_utils/configurable_backend_v2.py:

    from src.clonk.backend_utils.topology_visualization import pretty_print
    from src.clonk.backend_utils.mock_backends import FakeModular
    
    pb = FakeModular(module_size=5, children=4, total_levels=2)
    pretty_print(pb)


    Decomposition Transpiler Pass

The pass manager for data collection is defined in /src/clonk/utils/transpiler_passes/pass_manager_v3.py, which needs additional decomposition passes for $\sqrt{\texttt{iSwap}}$ and $\texttt{SYC}$ gates. Next, we test the $\sqrt{\texttt{iSwap}}$ pass by showing that its Haar score tends to 2.21, as expected:

    from qiskit.quantum_info.random import random_unitary
    from qiskit import QuantumCircuit
    from src.clonk.utils.riswap_gates.riswap import RiSwapGate
    from src.clonk.utils.transpiler_passes.weyl_decompose import RootiSwapWeylDecomposition
    from qiskit.transpiler.passes import CountOps
    from qiskit.transpiler import PassManager
    from tqdm import tqdm
    
    N = 2000
    basis_gate = RiSwapGate(0.5)
    
    pm0 = PassManager()
    pm0.append(RootiSwapWeylDecomposition(basis_gate=basis_gate))
    pm0.append(CountOps())
    
    res = 0
    for _ in tqdm(range(N)):
        qc = QuantumCircuit(2)
        qc.append(random_unitary(dims=4), [0, 1])
        pm0.run(qc)
        res += pm0.property_set["count_ops"]["riswap"]
    print("Haar score:", res / N)

100%|██████████| 2000/2000 [00:10<00:00, 189.84it/s]

Haar score: 2.1925
    

    Creating a Benchmark

We need to define the circuits, circuit sizes, topologies, and basis gates we want to transpile to and plot results for. We do this by wrapping each backend object and its transpiler pass manager into an object that handles data collection, in src/clonk/benchmark_suite/backend_suite_v3.py. The set used for data collection in the paper is in src/clonk/benchmark_suite/backend_suite_v2.py; the relevant change is that ‘v3’ uses a pass manager that is slightly more optimized for runtime. To reproduce the results, we include the v2 versions, which can regenerate the data from scratch when the overwrite parameter is set.

    from src.clonk.benchmark_suite.backend_suite_v3 import simple_backends_v3
    
    print([backend.label for backend in simple_backends_v3])
    ['Heavy-Hex-cx-smallv3', 'Square-Lattice-syc-smallv3', 'Modular-riswap-smallv3', 'Corral-8-(0, 0)-riswap-smallv3']
    

Note: We make a modification in Supermarq for the efficient generation of QAOA circuits that eliminates the need to optimize the 1Q gate parameters; this does not affect our results. As an interim fix, comment out line 40 of supermarq/benchmarks/qaoa_vanilla_proxy.py and replace it with:

    #self.params = self._gen_angles()
self.params = np.random.uniform(size=2) * 2 * np.pi

from src.clonk.benchmark_suite.circuit_suite import circuits
    
    q_size = 4
    circuits["QAOA_Vanilla"].circuit_lambda(q_size).decompose().draw()
    global phase: 3.757
          ┌─────────┐                                                    »
     q_0: ┤ U2(0,π) ├──■─────────────────■────■──────────────────■────■──»
          ├─────────┤┌─┴─┐┌───────────┐┌─┴─┐  │                  │    │  »
     q_1: ┤ U2(0,π) ├┤ X ├┤ U1(3.757) ├┤ X ├──┼──────────────────┼────┼──»
          ├─────────┤└───┘└───────────┘└───┘┌─┴─┐┌────────────┐┌─┴─┐  │  »
     q_2: ┤ U2(0,π) ├───────────────────────┤ X ├┤ U1(-3.757) ├┤ X ├──┼──»
          ├─────────┤                       └───┘└────────────┘└───┘┌─┴─┐»
     q_3: ┤ U2(0,π) ├───────────────────────────────────────────────┤ X ├»
          └─────────┘                                               └───┘»
    m0: 4/═══════════════════════════════════════════════════════════════»
                                                                         »
    «                         ┌────────────┐             ┌─┐          »
    « q_0: ────────────────■──┤ R(10.21,0) ├─────────────┤M├──────────»
    «                      │  └────────────┘             └╥┘          »
    « q_1: ────────────────┼────────■─────────────────────╫───■───────»
    «                      │        │                     ║   │       »
    « q_2: ────────────────┼────────┼─────────────────────╫───┼────■──»
    «      ┌────────────┐┌─┴─┐    ┌─┴─┐     ┌───────────┐ ║ ┌─┴─┐┌─┴─┐»
    « q_3: ┤ U1(-3.757) ├┤ X ├────┤ X ├─────┤ U1(3.757) ├─╫─┤ X ├┤ X ├»
    «      └────────────┘└───┘    └───┘     └───────────┘ ║ └───┘└───┘»
    «m0: 4/═══════════════════════════════════════════════╩═══════════»
    «                                                     0           »
    «                                                                              
    « q_0: ────────────────────────────────────────────────────────────────────────
    «                                                          ┌────────────┐┌─┐   
    « q_1: ─────────────────────────■───────────────────────■──┤ R(10.21,0) ├┤M├───
    «                             ┌─┴─┐     ┌────────────┐┌─┴─┐├────────────┤└╥┘┌─┐
    « q_2: ────────────────■──────┤ X ├─────┤ U1(-3.757) ├┤ X ├┤ R(10.21,0) ├─╫─┤M├
    «      ┌────────────┐┌─┴─┐┌───┴───┴────┐└────┬─┬─────┘└───┘└────────────┘ ║ └╥┘
    « q_3: ┤ U1(-3.757) ├┤ X ├┤ R(10.21,0) ├─────┤M├──────────────────────────╫──╫─
    «      └────────────┘└───┘└────────────┘     └╥┘                          ║  ║ 
    «m0: 4/═══════════════════════════════════════╩═══════════════════════════╩══╩═
    «                                             3                           1  2 
    """Example:"""
    
    from src.clonk.benchmark_suite.main_plotting import benchmark, plot_wrap
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=simple_backends_v3,
            circuit_generator=circuit_gen,
            q_range=[4, 6, 8, 12, 14, 16],
            continuously_save=1,
            overwrite=0,  # NOTE: turn this to 1 if you want to scrap the saved data and recollect a new batch
            repeat=1,
        )
    
    # NOTE when plotting use motivation = 1 to plot SWAP counts, and motivation = 0 to plot gate durations
    plot_wrap(simple_backends_v3, circuits.keys(), motivation=True, plot_average=True)
    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    


    📊 Results & Comparisons

    """Fig 4"""
    from src.clonk.benchmark_suite.backend_suite_v2 import motivation_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=motivation_backends,
            circuit_generator=circuit_gen,
            q_range=motivation_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(motivation_backends, circuits.keys(), motivation=True, plot_average=True)
    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    


    """Fig 10"""
    from src.clonk.benchmark_suite.backend_suite_v2 import small_results_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=small_results_backends,
            circuit_generator=circuit_gen,
            q_range=small_results_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(small_results_backends, circuits.keys(), motivation=True, plot_average=True)
    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    


"""Fig 12"""
    from src.clonk.benchmark_suite.backend_suite_v2 import results_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=results_backends,
            circuit_generator=circuit_gen,
            q_range=results_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(results_backends, circuits.keys(), motivation=True, plot_average=True)
    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    


    """Fig 13"""
    from src.clonk.benchmark_suite.backend_suite_v2 import small_results_part2_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=small_results_part2_backends,
            circuit_generator=circuit_gen,
            q_range=small_results_part2_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(
        small_results_part2_backends, circuits.keys(), motivation=False, plot_average=True
    )
    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    


    """Fig 14"""
    plot_wrap(results_backends, circuits.keys(), motivation=False, plot_average=True)


Finally, we use a quick calculation that converts the transpiled circuit data into usable numbers for the fidelity models.

    from src.clonk.benchmark_suite.backend_suite_v2 import small_results_part2_backendsv2
    from qiskit.converters import circuit_to_dag
    import numpy as np
    
    ignore = ["u"]
    
    for circuit_gen in circuits.values():  # [circuits['Quantum_Volume']]:
        print(circuit_gen.label)
        qc = circuit_gen.circuit_lambda(16)
    
        for backend in small_results_part2_backendsv2:
            print(backend.label)
            c = backend.pass_manager.run(qc)  # transpile :)
            d = circuit_to_dag(c)
        w = d.qubits  # use qubits: d.wires would also return classical bits
    
            qubit_wire_counts = np.zeros(20)
            for i, wi in enumerate(w):
                for node in d.nodes_on_wire(wi, only_ops=True):
                    if node.name in ignore:
                        continue
                    # count the 2Q ops
                    if node.name in ["cx", "fSim", "riswap"]:
                        qubit_wire_counts[i] += 1
    
            # print(qubit_wire_counts)
            print(sum(qubit_wire_counts))
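As a rough sketch (not the paper's actual fidelity model), per-wire two-qubit-gate counts like those printed above can feed a simple multiplicative estimate in which each gate succeeds independently with some fidelity f. The function name and parameters here are illustrative assumptions:

```python
import numpy as np

def estimated_fidelity(qubit_wire_counts, gate_fidelity=0.99):
    """Toy multiplicative model: each 2Q gate succeeds independently.

    Each 2Q gate touches two wires, so summing per-wire counts double-counts
    gates; divide by 2 to recover the total number of 2Q gates.
    """
    total_2q_gates = sum(qubit_wire_counts) / 2
    return gate_fidelity ** total_2q_gates

# 10 touches per wire on 4 wires = 40 touches = 20 gates; 0.99**20 ~= 0.818
counts = np.full(4, 10)
print(round(float(estimated_fidelity(counts)), 3))
```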

    Approximate Decomposition

This section requires cloning the fork https://github.com/evmckinney9/NuOp. Since it is not packaged, it lives in a directory called external so that the imports inside nuop_script.py resolve.
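One way to make an unpackaged clone importable is to put its directory on sys.path. This is a sketch assuming the fork was cloned to external/NuOp; the exact path nuop_script.py expects may differ:

```python
import sys
from pathlib import Path

# Hypothetical location of the cloned, unpackaged NuOp fork
NUOP_DIR = Path("external") / "NuOp"

def add_to_path(directory: Path) -> None:
    """Prepend a directory to sys.path so its modules can be imported."""
    path_str = str(directory.resolve())
    if path_str not in sys.path:
        sys.path.insert(0, path_str)

add_to_path(NUOP_DIR)
```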

    from src.clonk.benchmark_suite.nuop_script import create_plot2, collect_random2q_data
    N = 20
    base_fidelity_list = [0.97, 0.98, 1 - 10e-3, 1 - 5e-3, 1 - 10e-4, 1]
filename = "src/clonk/benchmark_suite/data-archive2/data1_random.h5"  # NOTE: preloaded; change the filename to recollect
    gate_error, decomp_error, fidelity_error = collect_random2q_data(
        1 - 10e-3, N=N, mode="random", fn=filename
    )
    create_plot2(gate_error, decomp_error, fidelity_error, plot_bool=0, fn=filename);


    create_plot2(gate_error, decomp_error, fidelity_error, plot_bool=1, fn=filename);


    📚 Reference

    @inproceedings{mckinney2023co,
      title={Co-Designed Architectures for Modular Superconducting Quantum Computers},
      author={McKinney, Evan and Xia, Mingkang and Zhou, Chao and Lu, Pinlei and Hatridge, Michael and Jones, Alex K},
      booktitle={2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA)},
      pages={759--772},
      year={2023},
      organization={IEEE}
    }
    Visit original content creator repository https://github.com/Pitt-JonesLab/clonk_transpilation
  • clonk_transpilation

    CLONK-CoupLing tOpology beNchmarKs

    Link to paper: https://ieeexplore.ieee.org/abstract/document/10071036

    Tests
    Format Check

    📌 Project Overview

    • Overview: This project innovates in the field of superconducting quantum computers by employing a SNAIL modulator to mitigate noise challenges prevalent in NISQ systems. It enhances qubit coupling and overall performance, offering a robust alternative to traditional designs.

    • Objective: To develop, optimize, and benchmark a SNAIL-based quantum computing architecture that excels in noise management and qubit coupling efficiency.

    • What’s Inside:

      • ConfigurableBackendV2: A module for programmatically creating qubit-connectivity backends.
      • Transpilation Pass Manager: Designed for quick basis gate translation.
      • Circuit and Backend Suites: Includes circuit_suite.py and backend_suite_v3.py for various benchmarks.
      • Demo: See HPCA_artifact.ipynb for a comprehensive demo focused on topology diameters and data movement operations.

    🚀 Getting Started

    Install using pip:

    pip install -e git+https://github.com/Pitt-JonesLab/clonk_transpilation#egg=clonk_transpilation
    

    or get started by exploring the main demo located at HPCA_artifact.ipynb.

    📋 Prerequisites

    • Set up everything using make init command.

    • Package Dependencies:

      • Dependency for topology plots: sudo apt install graphviz

    💻🐒 Usage

    Backend Creation

    In /src/clonk/backend_utils/mock_backends, target topologies are created by implementing ConfigurableFakeBackendV2 which is an abstract class defined in src/clonk/backend_utils/configurable_backend_v2.py

    from src.clonk.backend_utils.topology_visualization import pretty_print
    from src.clonk.backend_utils.mock_backends import FakeModular
    
    pb = FakeModular(module_size=5, children=4, total_levels=2)
    pretty_print(pb)

    png

    Decomposition Transpiler Pass

    The pass manager for data collection is defined in /src/clonk/utils/transpiler_passes/pass_manager_v3.py, which needs additional decomposition passes for $\sqrt{\texttt{iSwap}}$ and $\texttt{SYC}$ gates. Next, we test the $\sqrt{\texttt{iSwap}}$ pass by showing that it’s Haar score tends to 2.21 as expected

    from qiskit.quantum_info.random import random_unitary
    from qiskit import QuantumCircuit
    from src.clonk.utils.riswap_gates.riswap import RiSwapGate
    from src.clonk.utils.transpiler_passes.weyl_decompose import RootiSwapWeylDecomposition
    from qiskit.transpiler.passes import CountOps
    from qiskit.transpiler import PassManager
    from tqdm import tqdm
    
    N = 2000
    basis_gate = RiSwapGate(0.5)
    
    pm0 = PassManager()
    pm0.append(RootiSwapWeylDecomposition(basis_gate=basis_gate))
    pm0.append(CountOps())
    
    res = 0
    for _ in tqdm(range(N)):
        qc = QuantumCircuit(2)
        qc.append(random_unitary(dims=4), [0, 1])
        pm0.run(qc)
        res += pm0.property_set["count_ops"]["riswap"]
    print("Haar score:", res / N)

    100%|██████████| 2000/2000 [00:10<00:00, 189.84it/s]
    
    Haar score: 2.1925
    

    Creating a Benchmark

    We need to define the circuits, circuit sizes, topologies, and basis gates we want to transpile to and plot results for. We do this by wrapping the backend object and its transpiler pass manager into an object that handles data collection in src/clonk/benchmark_suite/backend_suite_v3.py. The set used for data collection in the paper are in src/clonk/benchmark_suite/backend_suite_v2.py. The relevant change is that ‘v3’ uses a slightly more optimized pass manager (optimized for time). To reproduce the results we include the v2 versions which can regenerate the data from scratch by setting the overwrite parameter

    from src.clonk.benchmark_suite.backend_suite_v3 import simple_backends_v3
    
    print([backend.label for backend in simple_backends_v3])
    ['Heavy-Hex-cx-smallv3', 'Square-Lattice-syc-smallv3', 'Modular-riswap-smallv3', 'Corral-8-(0, 0)-riswap-smallv3']
    

    Note: We make a modification in Supermarq for the efficient generation of QAOA circuits that eliminates the need to optimize the 1Q gate parameters, but will not effect our results. To fix in interim, comment out supermarq/benchmarks/qaoa_vanilla_proxy.py, line 40 and replace with:

    #self.params = self._gen_angles()
    self.params = np.random.uniform(size=2) * 2 * np.pi

    from src.clonk.benchmark_suite.circuit_suite import circuits
    
    q_size = 4
    circuits["QAOA_Vanilla"].circuit_lambda(q_size).decompose().draw()
    global phase: 3.757
          ┌─────────┐                                                    »
     q_0: ┤ U2(0,π) ├──■─────────────────■────■──────────────────■────■──»
          ├─────────┤┌─┴─┐┌───────────┐┌─┴─┐  │                  │    │  »
     q_1: ┤ U2(0,π) ├┤ X ├┤ U1(3.757) ├┤ X ├──┼──────────────────┼────┼──»
          ├─────────┤└───┘└───────────┘└───┘┌─┴─┐┌────────────┐┌─┴─┐  │  »
     q_2: ┤ U2(0,π) ├───────────────────────┤ X ├┤ U1(-3.757) ├┤ X ├──┼──»
          ├─────────┤                       └───┘└────────────┘└───┘┌─┴─┐»
     q_3: ┤ U2(0,π) ├───────────────────────────────────────────────┤ X ├»
          └─────────┘                                               └───┘»
    m0: 4/═══════════════════════════════════════════════════════════════»
                                                                         »
    «                         ┌────────────┐             ┌─┐          »
    « q_0: ────────────────■──┤ R(10.21,0) ├─────────────┤M├──────────»
    «                      │  └────────────┘             └╥┘          »
    « q_1: ────────────────┼────────■─────────────────────╫───■───────»
    «                      │        │                     ║   │       »
    « q_2: ────────────────┼────────┼─────────────────────╫───┼────■──»
    «      ┌────────────┐┌─┴─┐    ┌─┴─┐     ┌───────────┐ ║ ┌─┴─┐┌─┴─┐»
    « q_3: ┤ U1(-3.757) ├┤ X ├────┤ X ├─────┤ U1(3.757) ├─╫─┤ X ├┤ X ├»
    «      └────────────┘└───┘    └───┘     └───────────┘ ║ └───┘└───┘»
    «m0: 4/═══════════════════════════════════════════════╩═══════════»
    «                                                     0           »
    «                                                                              
    « q_0: ────────────────────────────────────────────────────────────────────────
    «                                                          ┌────────────┐┌─┐   
    « q_1: ─────────────────────────■───────────────────────■──┤ R(10.21,0) ├┤M├───
    «                             ┌─┴─┐     ┌────────────┐┌─┴─┐├────────────┤└╥┘┌─┐
    « q_2: ────────────────■──────┤ X ├─────┤ U1(-3.757) ├┤ X ├┤ R(10.21,0) ├─╫─┤M├
    «      ┌────────────┐┌─┴─┐┌───┴───┴────┐└────┬─┬─────┘└───┘└────────────┘ ║ └╥┘
    « q_3: ┤ U1(-3.757) ├┤ X ├┤ R(10.21,0) ├─────┤M├──────────────────────────╫──╫─
    «      └────────────┘└───┘└────────────┘     └╥┘                          ║  ║ 
    «m0: 4/═══════════════════════════════════════╩═══════════════════════════╩══╩═
    «                                             3                           1  2 

    """Example:"""
    
    from src.clonk.benchmark_suite.main_plotting import benchmark, plot_wrap
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=simple_backends_v3,
            circuit_generator=circuit_gen,
            q_range=[4, 6, 8, 12, 14, 16],
            continuously_save=1,
            overwrite=0,  # NOTE: turn this to 1 if you want to scrap the saved data and recollect a new batch
            repeat=1,
        )
    
    # NOTE when plotting use motivation = 1 to plot SWAP counts, and motivation = 0 to plot gate durations
    plot_wrap(simple_backends_v3, circuits.keys(), motivation=True, plot_average=True)

    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    

    png

    📊 Results & Comparisons

    """Fig 4"""
    from src.clonk.benchmark_suite.backend_suite_v2 import motivation_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=motivation_backends,
            circuit_generator=circuit_gen,
            q_range=motivation_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(motivation_backends, circuits.keys(), motivation=True, plot_average=True)

    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    

    png

    """Fig 10"""
    from src.clonk.benchmark_suite.backend_suite_v2 import small_results_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=small_results_backends,
            circuit_generator=circuit_gen,
            q_range=small_results_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(small_results_backends, circuits.keys(), motivation=True, plot_average=True)

    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    

    png

    # """Fig 12"""
    from src.clonk.benchmark_suite.backend_suite_v2 import results_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=results_backends,
            circuit_generator=circuit_gen,
            q_range=results_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(results_backends, circuits.keys(), motivation=True, plot_average=True)

    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    

    png

    """Fig 13"""
    from src.clonk.benchmark_suite.backend_suite_v2 import small_results_part2_backends
    
    for circuit_gen in circuits.values():
        benchmark(
            backends=small_results_part2_backends,
            circuit_generator=circuit_gen,
            q_range=small_results_part2_backends[0].q_range,
            continuously_save=True,
            overwrite=False,
            repeat=1,
        )
    plot_wrap(
        small_results_part2_backends, circuits.keys(), motivation=False, plot_average=True
    )

    Starting benchmark for Quantum_Volume
    Starting benchmark for QFT
    Starting benchmark for QAOA_Vanilla
    Starting benchmark for TIM_Hamiltonian
    Starting benchmark for Adder
    Starting benchmark for GHZ
    

    png

    """Fig 14"""
    plot_wrap(results_backends, circuits.keys(), motivation=False, plot_average=True)

    png

    Finally, we use a quick calculation which converts the transpiled circuit data into useable numbers for the fidelity models.

    from src.clonk.benchmark_suite.backend_suite_v2 import small_results_part2_backendsv2
    from qiskit.converters import circuit_to_dag
    import numpy as np
    
    ignore = ["u"]
    
    for circuit_gen in circuits.values():  # [circuits['Quantum_Volume']]:
        print(circuit_gen.label)
        qc = circuit_gen.circuit_lambda(16)
    
        for backend in small_results_part2_backendsv2:
            print(backend.label)
            c = backend.pass_manager.run(qc)  # transpile
            d = circuit_to_dag(c)
            w = d.qubits  # use qubits, not wires: d.wires would also include classical bits
    
            qubit_wire_counts = np.zeros(20)
            for i, wi in enumerate(w):
                for node in d.nodes_on_wire(wi, only_ops=True):
                    if node.name in ignore:
                        continue
                    # count the 2Q ops
                    if node.name in ["cx", "fSim", "riswap"]:
                        qubit_wire_counts[i] += 1
    
            # print(qubit_wire_counts)
            print(sum(qubit_wire_counts))

    Approximate Decomposition

    This requires cloning the fork https://github.com/evmckinney9/NuOp to function. Since it is not packaged, I put it in a directory called external so that the imports inside nuop_script.py work.

    from src.clonk.benchmark_suite.nuop_script import create_plot2, collect_random2q_data

    N = 20
    base_fidelity_list = [0.97, 0.98, 1 - 10e-3, 1 - 5e-3, 1 - 10e-4, 1]
    filename = f"src/clonk/benchmark_suite/data-archive2/data1_random.h5"  # NOTE preloaded, change name of file to recollect
    gate_error, decomp_error, fidelity_error = collect_random2q_data(
        1 - 10e-3, N=N, mode="random", fn=filename
    )
    create_plot2(gate_error, decomp_error, fidelity_error, plot_bool=0, fn=filename);

    [figure]

    create_plot2(gate_error, decomp_error, fidelity_error, plot_bool=1, fn=filename);

    [figure]

    📚 Reference

    @inproceedings{mckinney2023co,
      title={Co-Designed Architectures for Modular Superconducting Quantum Computers},
      author={McKinney, Evan and Xia, Mingkang and Zhou, Chao and Lu, Pinlei and Hatridge, Michael and Jones, Alex K},
      booktitle={2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA)},
      pages={759--772},
      year={2023},
      organization={IEEE}
    }

    Visit original content creator repository
    https://github.com/Pitt-JonesLab/clonk_transpilation

  • css-exercises

    Visit original content creator repository
    https://github.com/weeezik/css-exercises

  • burn-to-get-nft-sc

    BurnToGet_program

    This is the Anchor program for getting NFTs by burning another tier's NFTs.

    📞 Contact me here: 👆🏻

    Email Twitter Discord Telegram

    Install Dependencies

    • Install node and yarn
    • Install ts-node as global command
    • Confirm that a Solana wallet is prepared: /home/---/.config/solana/id.json is used in the test case

    Usage

    • Main script source for all functionality is here: /cli/script.ts
    • Program account types are declared here: /cli/types.ts
    • Idl to make the JS binding easy is here: /target/types/burning.ts

    You can test that the script functions work as follows.

    • Change commands properly in the main functions of the script.ts file to call the other functions
    • Confirm the ANCHOR_WALLET environment variable of the ts-node script in package.json
    • Run yarn ts-node

    Features

    How to deploy this program?

    First of all, git clone the repository to your PC. Then, in the burning folder, run in the terminal:

    1. yarn

    2. anchor build. At the end of the output you will see:

    To deploy this program:
      $ solana program deploy /home/ubuntu/apollo/B2S_Contract/burning/target/deploy/burning.so
    The program address will default to this keypair (override with --program-id):
      /home/ubuntu/apollo/B2S_Contract/burning/target/deploy/burning-keypair.json
    
    3. solana-keygen pubkey /home/ubuntu/apollo/B2S_Contract/burning/target/deploy/burning-keypair.json

    4. This prints the pubkey of the program ID: ex."5N...x6k"

    5. Add this pubkey to lib.rs line 17: declare_id!("5N...x6k");

    6. Add this pubkey to Anchor.toml line 4: staking = "5N...x6k"

    7. Add this pubkey to types.ts line 6: export const BURNING_PROGRAM_ID = new PublicKey("5N...x6k");

    8. anchor build again

    9. solana program deploy /home/.../backend/target/deploy/burning.so

    Then, you can enjoy this program 🎭


    How to use?

    A Project Owner

    First of all, open the directory and run yarn

    Initproject

    You can change your wallet address in the package.json. Then uncomment the following statements:

        await initProject();
    
        // await register(new PublicKey("2ZQ..WLk"), [2, 2], [0,0], [2,1]);
        // await burnToGet(
        //     new PublicKey("2ZQ...WLk"),
        //     [
        //          new PublicKey("5bd...C5P"),
        //          new PublicKey("8uT...5Gk"),
        //          new PublicKey("HG3...dyw"),
        //      ]
        // )

    Then, in the /burning folder

    yarn ts-node

    Register

    You can send the NFT which will be listed in the marketplace.

    Style Set:
    Random      => 0
    "Black"     => 1
    "Silver"    => 2
    "Gold"      => 3
    "Diamond"   => 4
    
    Artist Set:
    Random          => 0
    "Big Pun"       => 1 
    "The Game"      => 2
    "April Walker"  => 3
    "Drink Champs"  => 4
    "Onyx"          => 5
    
    eg:
    // if users should burn Silver -> 2, Gold/Onyx -> 3, Diamond -> 1
    style   = [2, 3, 4]
    artist  = [0, 5, 0]
    amount  = [2, 3, 1]
        // await initProject();
    
        await register(new PublicKey("2ZQ..WLk"), [2, 3, 4], [0, 5, 0], [2, 3, 1]);
        // await burnToGet(
        //     new PublicKey("2ZQ...WLk"),
        //     [
        //          new PublicKey("5bd...C5P"),
        //          new PublicKey("8uT...5Gk"),
        //          new PublicKey("HG3...dyw"),
        //      ]
        // )

    Then, in the /burning folder

    yarn ts-node

    BurnToGet

    Users can burn NFTs to get the new NFT. Users should burn the total registered amount of NFTs (e.g. 2 + 3 + 1 = 6).

        // await initProject();
    
        // await register(new PublicKey("2ZQ..WLk"), [2, 3, 4], [0, 5, 0], [2, 3, 1]);
        await burnToGet(
            new PublicKey("2ZQ...WLk"),
            [
                 new PublicKey("5bd...C5P"),
                 new PublicKey("8uT...5Gk"),
                 new PublicKey("HG3...dyw"),
                 new PublicKey("54d...C5P"),
                 new PublicKey("1uT...5Gk"),
                 new PublicKey("kG3...dyw"),
             ]
        )

    Then, in the /burning folder

    yarn ts-node
    Visit original content creator repository https://github.com/vvizardev/burn-to-get-nft-sc
  • tags

    Laravel Logo

    Build Status Total Downloads Latest Stable Version License

    Application: tags

    The application runs at the following URL: http://www.tags.test/

    About Laravel

    Laravel is a web application framework with expressive, elegant syntax. We believe development must be an enjoyable and creative experience to be truly fulfilling. Laravel takes the pain out of development by easing common tasks used in many web projects, such as:

    • Simple, fast routing engine.
    • Powerful dependency injection container.
    • Multiple back-ends for session and cache storage.
    • Expressive, intuitive database ORM.
    • Database agnostic schema migrations.
    • Robust background job processing.
    • Real-time event broadcasting.

    Laravel is accessible, powerful, and provides tools required for large, robust applications.

    Learning Laravel

    Laravel has the most extensive and thorough documentation and video tutorial library of all modern web application frameworks, making it a breeze to get started with the framework.

    You may also try the Laravel Bootcamp, where you will be guided through building a modern Laravel application from scratch.

    If you don’t feel like reading, Laracasts can help. Laracasts contains over 2000 video tutorials on a range of topics including Laravel, modern PHP, unit testing, and JavaScript. Boost your skills by digging into our comprehensive video library.

    Laravel Sponsors

    We would like to extend our thanks to the following sponsors for funding Laravel development. If you are interested in becoming a sponsor, please visit the Laravel Patreon page.

    Premium Partners

    Contributing

    Thank you for considering contributing to the Laravel framework! The contribution guide can be found in the Laravel documentation.

    Code of Conduct

    In order to ensure that the Laravel community is welcoming to all, please review and abide by the Code of Conduct.

    Security Vulnerabilities

    If you discover a security vulnerability within Laravel, please send an e-mail to Taylor Otwell via taylor@laravel.com. All security vulnerabilities will be promptly addressed.

    License

    The Laravel framework is open-sourced software licensed under the MIT license.

    Visit original content creator repository https://github.com/silviaherguedas/tags
  • pong

    The Pong game with Reinforcement Learning AI Agent

    screen_shot

    Getting started

    A reinforcement learning Pong game demo written in JavaScript, which runs in the web browser.

    Description

    Two AI agents play the game. You can only act as an audience.

    If you are a programmer:

    1. install VSCode and Live Server extension
    2. open the index.html file with Live Server extension

    If you are not a programmer: deploy the project as an app on any HTTP server

    Actions

    Pong has an action space of 2; the table below lists the meaning of each action

    Value Meaning
    0 move up
    1 move down

    States

    Pong’s state is a tuple with 5 items. The table below lists the meaning of each item

    index Meaning min value max value
    0 the ball x coordinate 0.0 1.0
    1 the ball y coordinate 0.0 1.0
    2 the ball x velocity 0.5 0.1
    3 the ball y velocity -0.2 0.2
    4 the paddle y position 0.0 1.0

    The positive x direction is to the right; the positive y direction is up.

    Rewards

    You get a reward when the ball passes the paddle or collides with it.

    reward = math.log(abs(paddle_pos - ball_position.y) / area_height + 0.000001)
    
    • paddle_pos is the paddle center y position
    • ball_position.y is the ball center y position
    • area_height is the game area height
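    The reward formula above can be sanity-checked with a few lines of Python (a sketch only; the actual game logic is implemented in JavaScript, and the sample values here are hypothetical):

```python
import math

def pong_reward(paddle_pos, ball_y, area_height):
    # Log of the normalized vertical distance between the paddle center and
    # the ball center; the small epsilon avoids log(0) on a perfect alignment.
    return math.log(abs(paddle_pos - ball_y) / area_height + 0.000001)

# Since the normalized distance is at most 1, the reward is never positive,
# and it grows more negative as the paddle-ball gap shrinks:
close_gap = pong_reward(0.50, 0.52, 1.0)  # small gap -> strongly negative
wide_gap = pong_reward(0.10, 0.90, 1.0)   # large gap -> close to zero
```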

    How to train the model

    Please refer to the training README.md for training details.

    Screen Shots

    1. the training screen shot screen_shot

    2. the game screen shot screen_shot

    Visit original content creator repository https://github.com/lijian736/pong
  • msg_reply

    Smart Message Reply

    Have you ever seen or used Google Smart Reply? It’s a service that provides automatic reply suggestions for user messages. See below.

    This is a useful application of the retrieval based chatbot. Think about it. How many times do we text a message like thx, hey, or see you later? In this project, we build a simple message reply suggestion system.

    Kyubyong Park
    Code-review by Yj Choe

    Synonym group

    • We need to set the list of suggestions to show. Naturally, frequency is considered first. But what about phrases that are similar in meaning? For example, should thank you so much and thx be treated independently? We don’t think so. We want to group them and save our slots. How? We make use of a parallel corpus. Both thank you so much and thx are likely to be translated into the same text. Based on this assumption, we construct English synonym groups that share the same translation.
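    The grouping idea can be sketched in a few lines of Python (a toy illustration with made-up phrase pairs; the real construct_sg.py works over the full OpenSubtitles corpus):

```python
from collections import defaultdict

# Toy parallel corpus: (english, spanish) pairs. English phrases that share
# the same translation are collected into one synonym group.
pairs = [
    ("thank you so much", "muchas gracias"),
    ("thx", "muchas gracias"),
    ("thanks a lot", "muchas gracias"),
    ("see you later", "hasta luego"),
]

groups = defaultdict(set)
for en, es in pairs:
    groups[es].add(en)

# Keep only non-trivial groups (more than one English phrase).
synonym_groups = [sorted(g) for g in groups.values() if len(g) > 1]
```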

    Model

    We fine-tune Hugging Face’s pretrained BERT model for sequence classification. In it, a special starting token [CLS] stores the information of the entire sentence. Extra layers project this condensed representation onto the classification units (here, 100).
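    The [CLS] projection can be illustrated with plain NumPy (all shapes here are hypothetical; the real model uses BERT's hidden states and a trained classification head):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: batch of 2 sentences, 8 tokens each, hidden size 16.
hidden_states = rng.normal(size=(2, 8, 16))

# [CLS] is the first token; its hidden state summarizes the whole sentence.
cls_vectors = hidden_states[:, 0, :]

# A linear head projects the condensed vector onto the 100 synonym-group classes.
head = rng.normal(size=(16, 100))
logits = cls_vectors @ head  # shape (2, 100): one score per class per sentence
```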

    Data

    • We use OpenSubtitles 2018 Spanish-English parallel corpus to construct synonym groups. OpenSubtitles is a large collection of translated movie subtitles. The en-es data consists of more than 61M aligned lines.
    • Ideally, a (very) large dialog corpus is needed for training, which we failed to find. We use the Cornell Movie Dialogue Corpus, instead. It’s composed of 83,097 dialogues or 304,713 lines.

    Requirements

    • python>=3.6
    • tqdm>=4.30.0
    • pytorch>=1.0
    • pytorch_pretrained_bert>=0.6.1
    • nltk>=3.4

    Training

    • STEP 0. Download OpenSubtitles 2018 Spanish-English Parallel data.
    bash download.sh
    
    • STEP 1. Construct synonym groups from the corpus.
    python construct_sg.py
    
    • STEP 2. Make phr2sg_id and sg_id2phr dictionaries.
    python make_phr2sg_id.py
    
    • STEP 3. Convert a monolingual English text to ids.
    python encode.py
    
    • STEP 4. Create training data and save them as pickle.
    python prepro.py
    
    • STEP 5. Train.
    python train.py
    

    Test (Demo)

    python test.py --ckpt log/9500_ACC0.1.pt
    

    Notes

    • Training loss slowly but steadily decreases.
    • Accuracy@5 on the evaluation data is from 10 to 20 percent.
    • For real application, a much much larger corpus is needed.
    • Not sure how much movie scripts are similar to message dialogues.
    • A better strategy for constructing synonym groups is necessary.
    • A retrieval-based chatbot is a realistic application, as it is safer and easier than a generation-based one.
    Visit original content creator repository https://github.com/Kyubyong/msg_reply
  • Beta-modelling

    Beta modelling

    A Jupyter notebook that simplifies beta modelling of X-ray images of elliptical galaxies. The notebook combines ipywidgets with the Sherpa 4.14 package and provides a simple graphical interface for fitting the 2D surface brightness distribution of astronomical X-ray images. The notebook offers a set of models (simple or complex β-model, Cavaliere & Fusco-Femiano 1978) whose parameters can be easily adjusted, frozen, or tied using clickable sliders and checkboxes.
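    As a reference for what is being fitted, the radial profile of a single β-model can be written in a few lines (a sketch with arbitrary example parameters; the notebook itself fits 2D images through Sherpa's model components):

```python
import numpy as np

def beta_model(r, s0, r_c, beta):
    # Single beta-model surface-brightness profile (Cavaliere & Fusco-Femiano):
    #   S(r) = S0 * [1 + (r / r_c)^2] ** (0.5 - 3 * beta)
    return s0 * (1.0 + (r / r_c) ** 2) ** (0.5 - 3.0 * beta)

# Example parameters: central brightness 1.0, core radius 20 px, beta = 2/3.
r = np.linspace(0.0, 100.0, 50)
profile = beta_model(r, s0=1.0, r_c=20.0, beta=2.0 / 3.0)
# The profile peaks at S0 in the center and falls off monotonically.
```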

    Usage

    The beta_fitting.ipynb notebook can be used simply by running it in the Jupyter Notebook or JupyterLab platform using a Python environment with all required libraries (listed under Requirements). Alternatively, the notebook can be run using the Voilà package:

    $ voila beta_fitting.ipynb

    which autoruns the whole notebook and displays the cell outputs in a new browser tab in a cleaner way than the classical Jupyter notebook.

    The notebook automatically finds all FITS files in the current directory and lists them in the Galaxy: dropdown selection menu. When the galaxy image is loaded, one can pick a size scale of the fitted part of the image and also choose between various types of models (single or double beta model, etc.). A given model is described by a set of parameters that can be adjusted, frozen, or tied to others using sliders and checkboxes.

    Once an optimal model and set of parameters is chosen by the user, it can be fitted using the Fit button. The fitted parameters can be saved into a text file using the Save to txt button, and the residual image is saved by the Save residual button. Alternatively, the user can run an MCMC simulation (Run MCMC button) to properly estimate the uncertainties of the fitted parameters and also the correlations between them. One can set the length and burn length of the MCMC chain as well as the fit statistic (chi2, cstat, etc.). After the chain has finished and been saved into a FITS file, the distributions of fitted parameters are plotted in a corner plot and saved into a PDF.

    The output window in the bottom right shows radial profiles of both the data and the model (individual model components are displayed separately), the original image, the model image, and also a residual image obtained by subtracting the model from the original image.

    Note: the notebook runs smoother than the gif shows :)

    Requirements

    Python libraries:

    astropy
    corner
    ipywidgets
    matplotlib
    numpy
    pandas
    scipy
    sherpa
    voila (optional – for clean output in new tab)

    Data:

    Processed X-ray (Chandra, XMM-Newton) image of an elliptical galaxy
    – background subtracted & exposure corrected if possible
    – cropped and centered at the center of the galaxy
    – excluded & filled point sources

    Example

    The GitHub repository includes three example X-ray images of elliptical galaxies (NGC4649, NGC4778, NGC5813) observed by the Chandra X-ray Observatory. Observations of all objects were processed with standard CIAO procedures, background-subtracted & exposure-corrected, and point sources were removed using the dmfilth routine (Note: for filling point sources with dmfilth, the images were multiplied by the lowest pixel value in order for the Poisson statistics to work properly).

    Todo

    • add sersic and other profiles
    • add image preprocessing functionalities (finding and filling point sources)
    • add other methods (unsharp masking, GGM)
    • implement Arithmetic with X-ray images (Churazov et al. 2016)
    • implement CADET predictions
    • add cavity selection + significance estimation
    Visit original content creator repository https://github.com/tomasplsek/Beta-modelling
  • nginx-stigready-baseline

    nginx-stigready-baseline

    InSpec profile for nginx 1.19, validating its secure configuration against Web Server SRG Version 2 Release 3.

    Getting Started

    It is intended and recommended that this profile be run with InSpec from a “runner” host (such as a DevOps orchestration server, an administrative management system, or a developer’s workstation/laptop) against the target remotely over SSH.

    The latest versions and installation options are available at the InSpec site.

    Running This Baseline Directly from Github

    # How to run
    inspec exec https://github.com/mitre/nginx-stigready-baseline/archive/master.tar.gz -t ssh:// --input-file=<path_to_your_inputs_file/name_of_your_inputs_file.yml> --reporter=cli json:<path_to_your_output_file/name_of_your_output_file.json>
    

    Different Run Options

    Full exec options

    Running This Baseline from a local Archive copy

    If your runner is not always expected to have direct access to GitHub, use the following steps to create an archive bundle of this baseline and all of its dependent tests:

    (Git is required to clone the InSpec profile using the instructions below. Git can be downloaded from the Git site.)

    When the “runner” host uses this profile baseline for the first time, follow these steps:

    mkdir profiles
    cd profiles
    git clone https://github.com/mitre/nginx-stigready-baseline
    inspec archive nginx-stigready-baseline
    inspec exec <name of generated archive> -t ssh:// --input-file=<path_to_your_inputs_file/name_of_your_inputs_file.yml> --reporter=cli json:<path_to_your_output_file/name_of_your_output_file.json>
    

    For every successive run, follow these steps to always have the latest version of this baseline:

    cd nginx-stigready-baseline
    git pull
    cd ..
    inspec archive nginx-stigready-baseline --overwrite
    inspec exec <name of generated archive> -t ssh:// --input-file=<path_to_your_inputs_file/name_of_your_inputs_file.yml> --reporter=cli json:<path_to_your_output_file/name_of_your_output_file.json>
    

    Viewing the JSON Results

    The JSON results output file can be loaded into heimdall-lite for a user-interactive, graphical view of the InSpec results.

    The JSON InSpec results file may also be loaded into a full heimdall server, allowing for additional functionality such as to store and compare multiple profile runs.

    Testing with Kitchen

    Dependencies

    Setup Environment

    1. Clone the repo via git clone git@github.com:mitre/nginx-stigready-baseline.git
    2. cd to nginx-stigready-baseline
    3. Run gem install bundler
    4. Run bundle install
    5. Run export KITCHEN_YAML=kitchen.vagrant.yml – Docker and EC2 Kitchen Yaml files are available for testing

    Execute Tests

    1. Run bundle exec kitchen create – create host based on two suites, vanilla and hardened
    2. Run bundle exec kitchen list – you should see the following choices:
      • vanilla-ubuntu-1804
      • hardened-ubuntu-1804
    3. Run bundle exec kitchen converge
    4. Run bundle exec kitchen list – you should see your hosts with status “converged”
    5. Run bundle exec kitchen verify – Once finished, the results should be in the ‘results’ directory.

    Authors

    • Timothy J Miller
    • The MITRE InSpec Team

    Special Thanks

    Contributing and Getting Help

    To report a bug or feature request, please open an issue.

    NOTICE

    © 2018-2020 The MITRE Corporation.

    Approved for Public Release; Distribution Unlimited. Case Number 18-3678.

    NOTICE

    MITRE hereby grants express written permission to use, reproduce, distribute, modify, and otherwise leverage this software to the extent permitted by the licensed terms provided in the LICENSE.md file included with this project.

    NOTICE

    This software was produced for the U. S. Government under Contract Number HHSM-500-2012-00008I, and is subject to Federal Acquisition Regulation Clause 52.227-14, Rights in Data-General.

    No other use other than that granted to the U. S. Government, or to those acting on behalf of the U. S. Government under that Clause is authorized without the express written permission of The MITRE Corporation.

    For further information, please contact The MITRE Corporation, Contracts Management Office, 7515 Colshire Drive, McLean, VA 22102-7539, (703) 983-6000.

    NOTICE

    DISA STIGs are published by DISA IASE, see: https://iase.disa.mil/Pages/privacy_policy.aspx

    Visit original content creator repository
    https://github.com/mitre/nginx-stigready-baseline