27 changes: 14 additions & 13 deletions .github/workflows/integration_tests.yml
@@ -10,9 +10,9 @@ on:
workflow_dispatch:
inputs:
target:
description: 'Target (choose nightly to run like nightly tests)'
description: "Target (choose nightly to run like nightly tests)"
required: true
default: 'nightly'
default: "nightly"
type: choice
options:
- nightly
@@ -28,41 +28,42 @@ on:
- quantinuum
- scaleway
- tii
- qbraid
single_test_name:
type: string
required: false
description: 'Single test (e.g., targettests/quantinuum/load_value.cpp). Runs default tests if left blank'
description: "Single test (e.g., targettests/quantinuum/load_value.cpp). Runs default tests if left blank"
target_machine:
type: string
required: false
description: 'Target machine (e.g., H2-1E).'
description: "Target machine (e.g., H2-1E)."
cudaq_test_image:
type: string
required: false
default: '' # picked up from repo variable if not provided
description: 'CUDA Quantum image to run the tests in. Default to the latest CUDA Quantum nightly image'
default: "" # picked up from repo variable if not provided
description: "CUDA Quantum image to run the tests in. Default to the latest CUDA Quantum nightly image"
commit_sha:
type: string
required: false
description: 'Commit SHA to pull the code (examples/tests) for testing. Default to the commit associated with the CUDA Quantum docker image if left blank'
description: "Commit SHA to pull the code (examples/tests) for testing. Default to the commit associated with the CUDA Quantum docker image if left blank"
workflow_id:
type: string
required: false
description: 'Workflow Id to retrieve the Python wheel for testing. Default to the wheels produced by the Publishing workflow associated with the latest nightly CUDA Quantum Docker image if left blank'
description: "Workflow Id to retrieve the Python wheel for testing. Default to the wheels produced by the Publishing workflow associated with the latest nightly CUDA Quantum Docker image if left blank"
python_version:
type: choice
required: true
description: 'Python version to run wheel test'
description: "Python version to run wheel test"
options:
- '3.11'
- '3.12'
- '3.13'
- "3.11"
- "3.12"
- "3.13"

schedule:
- cron: 0 3 * * *

env:
python_version: '3.12'
python_version: "3.12"

jobs:
# Run a daily check of all links in the docs to find any newly broken links
49 changes: 49 additions & 0 deletions docs/sphinx/targets/cpp/qbraid.cpp
@@ -0,0 +1,49 @@
// Compile and run with:
// ```
// nvq++ --target qbraid qbraid.cpp -o out.x && ./out.x
// ```
// This will submit the job to the qBraid ideal simulator target (default).

#include <cudaq.h>
#include <fstream>

// Define a simple quantum kernel to execute on qBraid.
struct ghz {
// Maximally entangled state between 5 qubits.
auto operator()() __qpu__ {
cudaq::qvector q(5);
h(q[0]);
for (int i = 0; i < 4; i++) {
x<cudaq::ctrl>(q[i], q[i + 1]);
}
mz(q);
}
};

int main() {
// Submit to qBraid asynchronously (i.e., continue executing
// code in the file until the job has been returned).
auto future = cudaq::sample_async(ghz{});
// ... classical code to execute in the meantime ...

// Can write the future to file:
{
std::ofstream out("saveMe.json");
out << future;
}

// Then come back and read it in later.
cudaq::async_result<cudaq::sample_result> readIn;
std::ifstream in("saveMe.json");
in >> readIn;

// Get the results of the read in future.
auto async_counts = readIn.get();
async_counts.dump();

// OR: Submit to qBraid synchronously (i.e., wait for the job
// result to be returned before proceeding).
auto counts = cudaq::sample(ghz{});
counts.dump();
}
52 changes: 52 additions & 0 deletions docs/sphinx/targets/python/qbraid.py
@@ -0,0 +1,52 @@
import cudaq

# You only have to set the target once! No need to redefine it
# for every execution call on your kernel.
# To use different targets in the same file, you must update
# it via another call to `cudaq.set_target()`.
cudaq.set_target("qbraid")


# Create the kernel we'd like to execute on qBraid.
@cudaq.kernel
def kernel():
qvector = cudaq.qvector(2)
h(qvector[0])
x.ctrl(qvector[0], qvector[1])



# Execute on qBraid and print out the results.

# Option A:
# By using the asynchronous `cudaq.sample_async`, the remaining
# classical code will be executed while the job is being handled
# by qBraid. This is ideal when submitting via a queue over
# the cloud.
async_results = cudaq.sample_async(kernel)
# ... more classical code to run ...

# We can either retrieve the results later in the program with
# ```
# async_counts = async_results.get()
# ```
# or we can also write the job reference (`async_results`) to
# a file and load it later or from a different process.
with open("future.txt", "w") as file:
    file.write(str(async_results))

# We can later read the file content and retrieve the job
# information and results.
with open("future.txt", "r") as same_file:
    retrieved_async_results = cudaq.AsyncSampleResult(same_file.read())

counts = retrieved_async_results.get()
print(counts)

# Option B:
# By using the synchronous `cudaq.sample`, the execution of
# any remaining classical code in the file will occur only
# after the job has been returned from qBraid.
counts = cudaq.sample(kernel)
print(counts)
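The write-to-file / read-back pattern above can be sketched independently of the quantum backend. The following is a minimal, hedged stand-in (the `FakeAsyncResult` class and its JSON payload are hypothetical illustrations, not part of the CUDA-Q API) showing how a serialized job reference can be persisted to disk and rebuilt later, possibly from a different process:

```python
import json
import os
import tempfile


class FakeAsyncResult:
    """Hypothetical stand-in for `cudaq.AsyncSampleResult` (illustration only)."""

    def __init__(self, serialized: str):
        # Rebuild the job reference from its serialized string form.
        self.job = json.loads(serialized)

    def get(self):
        # A real async result would poll the cloud job here;
        # this stand-in just returns the stored counts.
        return self.job["counts"]


# Serialize a (fake) job handle to disk, mirroring `str(async_results)` above.
handle = json.dumps({"job_id": "abc123", "counts": {"00": 490, "11": 510}})
path = os.path.join(tempfile.mkdtemp(), "future.txt")
with open(path, "w") as f:
    f.write(handle)

# Later: read the file content back and retrieve the job results.
with open(path, "r") as f:
    restored = FakeAsyncResult(f.read())
print(restored.get())  # {'00': 490, '11': 510}
```

Serializing the handle rather than the raw counts is what lets a long-running cloud job outlive the submitting process.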
3 changes: 2 additions & 1 deletion docs/sphinx/using/backends/cloud.rst
@@ -5,6 +5,7 @@ CUDA-Q provides a number of options to access hardware resources (GPUs and QPUs)

.. toctree::
:maxdepth: 1

Amazon Braket (braket) <cloud/braket.rst>
Scaleway QaaS (scaleway) <cloud/scaleway.rst>
Qbraid <cloud/qbraid.rst>
61 changes: 61 additions & 0 deletions docs/sphinx/using/backends/cloud/qbraid.rst
@@ -0,0 +1,61 @@
qBraid
++++++

.. _qbraid-backend:

Setting Credentials
`````````````````````````

Programmers of CUDA-Q may access `qBraid devices
<https://account.qbraid.com/>`__ from either C++ or Python. Generate
an API key from your `qBraid account <https://account.qbraid.com/>`__ and export
it as an environment variable:

.. code:: bash

export QBRAID_API_KEY="qbraid_generated_api_key"


Submission from Python
`````````````````````````

First, set the :code:`qbraid` backend.

.. code:: python

cudaq.set_target('qbraid')

By default, quantum kernel code will be submitted to the qBraid ideal simulator.

To emulate the qBraid simulator locally, without submitting through the cloud, you can also set the ``emulate`` flag to ``True``. This will emit any target-specific compiler diagnostics before running a noise-free emulation.

.. code:: python

cudaq.set_target('qbraid', emulate=True)

The number of shots for a kernel execution can be set through the ``shots_count`` argument to ``cudaq.sample`` or ``cudaq.observe``. By default, the ``shots_count`` is set to 1000.

.. code:: python

cudaq.sample(kernel, shots_count=10000)

To see a complete example for using qBraid's backends, take a look at our :doc:`Python examples <../../examples/examples>`.

Submission from C++
`````````````````````````
To target quantum kernel code for execution on qBraid,
pass the flag ``--target qbraid`` to the ``nvq++`` compiler.

.. code:: bash

nvq++ --target qbraid src.cpp

This will take the API key and handle all authentication with, and submission to, the qBraid device. By default, quantum kernel code will be submitted to the qBraid ideal simulator.

To emulate the qBraid machine locally, without submitting through the cloud, you can also pass the ``--emulate`` flag to ``nvq++``. This will emit any target-specific compiler diagnostics before running a noise-free emulation.

.. code:: bash

nvq++ --emulate --target qbraid src.cpp

To see a complete example for using qBraid's backends, take a look at our :doc:`C++ examples <../../examples/examples>`.