
NUWE 1 #1269

Open
gpgvision93-max wants to merge 14 commits into OpenBCI:master from gpgvision93-max:copilot/generate-alphabetical-text

Conversation

@gpgvision93-max

No description provided.

ranyabel and others added 14 commits April 9, 2025 12:03
- [x] Explore repository structure and understand existing patterns
- [ ] Create `pyopenbci` Python package with `__init__.py`, `board.py`,
and utility modules
- [ ] Create `setup.py` / `pyproject.toml` to make the package
installable via pip
- [ ] Add the package to the repository under `Networking-Test-Kit` or
top-level
- [ ] Validate that `import pyopenbci as obci` works without errors
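A minimal skeleton that would make the last checklist item pass might look like the following. The `Board` class, its methods, and the attribute names are placeholders invented for illustration, not the actual package API:

```python
# Hypothetical contents of pyopenbci/__init__.py -- just enough so that
# `import pyopenbci as obci` succeeds; real board I/O is not implemented here.

class Board:
    """Stub board interface; a real implementation would manage the serial/BLE link."""

    def __init__(self, port=None):
        self.port = port          # e.g. "/dev/ttyUSB0"; None until configured
        self.streaming = False

    def start_stream(self):
        # Real code would begin pulling samples from the hardware here.
        self.streaming = True

    def stop_stream(self):
        self.streaming = False

__all__ = ["Board"]
```

Packaging it with a `pyproject.toml` (as the checklist plans) would then make `pip install .` expose the module.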

<details>

<summary>Original prompt</summary>

> Traceback (most recent call last):
>   File "/data/user/0/iiec.pyramide.python/files/temp_save_dir/new(17).py", line 1, in <module>
>     import pyopenbci as obci
> ModuleNotFoundError: No module named 'pyopenbci'
>
> [Terminal session finished]


</details>
Agent-Logs-Url: https://github.com/gpgvision93-max/OpenBCI_GUI/sessions/d972f7fa-09b7-44d5-a8c7-890437620d08

Co-authored-by: gpgvision93-max <269192231+gpgvision93-max@users.noreply.github.com>
…ference (#13)

This change adds a focused real-time decoding utility matching the
requested pipeline: consume EEG from LSL, run transformer inference on
sliding windows, and emit Afrikaans labels from English class outputs.

- **New real-time decoder script**
  - Added `Networking-Test-Kit/LSL/eeg_lsl_to_afrikaans.py` implementing:
    - EEG stream discovery via `pylsl.resolve_stream('type', 'EEG')`
    - rolling buffer (`WINDOW_SIZE=250`) with sliding step (`STEP_SIZE=50`)
    - per-channel z-normalization
    - model inference and label-to-Afrikaans mapping (`EN_TO_AF`)

- **Model loading and runtime safety**
  - Added `load_model()` with safe-first loading:
    - prefers `torch.jit.load(...)`
    - allows unsafe pickle-based load only when `ALLOW_UNSAFE_TORCH_LOAD=1`
  - Validates stream/sample shape before inference (skips under-channel samples with a one-time diagnostic)

- **Inference/device handling**
  - Added configurable inference device via `EEG_DECODER_DEVICE` (default `cpu`)
  - Ensures the input tensor is moved to the selected device before model execution
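The safe-first loading flow described above could be sketched as follows. The function body is an assumption reconstructed from the bullet points, not the exact code in the PR:

```python
import os
import torch

def load_model(path, device="cpu"):
    # Prefer TorchScript loading, which does not execute arbitrary pickles.
    try:
        model = torch.jit.load(path, map_location=device)
    except Exception:
        # Pickle-based torch.load can execute arbitrary code on load; gate it
        # behind an explicit opt-in, as the PR description specifies.
        if os.environ.get("ALLOW_UNSAFE_TORCH_LOAD") != "1":
            raise RuntimeError(
                "TorchScript load failed; set ALLOW_UNSAFE_TORCH_LOAD=1 "
                "to permit the unsafe pickle-based fallback"
            )
        model = torch.load(path, map_location=device)
    model.eval()
    return model
```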

```python
with torch.no_grad():
    logits = model(segment_tensor.to(device))
    predicted_class = torch.argmax(logits, dim=-1).item()

predicted_label_eng = model.labels[predicted_class] if hasattr(model, "labels") else str(predicted_class)
af_label = EN_TO_AF.get(predicted_label_eng, predicted_label_eng)
print(f"Afrikaans Prediction: {af_label}")
```
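The rolling buffer, per-channel z-normalization, and window-slide steps listed above can be exercised in isolation with synthetic data. The 8-channel assumption and the window/step constants are taken from the PR description; the random samples are purely illustrative:

```python
import numpy as np
from collections import deque

WINDOW_SIZE = 250   # samples per inference window (1 s at 250 Hz)
STEP_SIZE = 50      # samples dropped between consecutive windows
CHANNELS = 8

buffer = deque(maxlen=WINDOW_SIZE)
rng = np.random.default_rng(0)
for _ in range(WINDOW_SIZE):
    buffer.append(rng.normal(size=CHANNELS))  # one synthetic EEG sample

segment = np.array(buffer)  # shape [WINDOW_SIZE, CHANNELS]
# Per-channel z-normalization; the small epsilon avoids division by zero.
segment = (segment - np.mean(segment, axis=0)) / (np.std(segment, axis=0) + 1e-6)

# Slide the window: drop the oldest STEP_SIZE samples before the next prediction.
for _ in range(STEP_SIZE):
    buffer.popleft()
```

After normalization each channel has approximately zero mean and unit standard deviation, and the buffer retains `WINDOW_SIZE - STEP_SIZE` samples.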

<details>

<summary>Original prompt</summary>

> import numpy as np
> import torch
> from pylsl import StreamInlet, resolve_stream
> from collections import deque
> import time
> 
> # -----------------------------
> # CONFIGURATION
> # -----------------------------
> CHANNELS = 8            # EEG channels
> SAMPLING_RATE = 250     # Hz
> WINDOW_SIZE = 250       # 1 second
> STEP_SIZE = 50          # Sliding step
> MODEL_PATH = "eeg_transformer.pt"
> 
> # Simple English → Afrikaans mapping for demo purposes
> EN_TO_AF = {
>     "LEFT": "LINKER",
>     "RIGHT": "REGTER",
>     "UP": "OP",
>     "DOWN": "AF",
>     "HELLO": "HALLO",
>     "BYE": "TOTSIE"
> }
> 
> # -----------------------------
> # CONNECT TO EEG STREAM
> # -----------------------------
> print("Resolving EEG stream...")
> streams = resolve_stream('type', 'EEG')
> inlet = StreamInlet(streams[0])
> print("Connected to EEG stream.")
> 
> # -----------------------------
> # LOAD TRANSFORMER MODEL
> # -----------------------------
> print("Loading transformer model...")
> device = torch.device('cpu')  # change to 'cuda' if GPU available
> model = torch.load(MODEL_PATH, map_location=device)
> model.eval()
> print("Model loaded.")
> 
> # -----------------------------
> # REAL-TIME BUFFER
> # -----------------------------
> buffer = deque(maxlen=WINDOW_SIZE)
> print("Starting real-time EEG → Afrikaans decoding...")
> 
> try:
>     while True:
>         sample, _ = inlet.pull_sample()
>         if sample is None:
>             continue
> 
>         buffer.append(sample)
> 
>         # Predict only when buffer full
>         if len(buffer) == WINDOW_SIZE:
>             segment = np.array(buffer)
>             # Normalize per channel
>             segment = (segment - np.mean(segment, axis=0)) / (np.std(segment, axis=0) + 1e-6)
>             segment_tensor = torch.tensor(segment, dtype=torch.float32).unsqueeze(0)  # [1, TIME, CHANNELS]
> 
>             # Transformer prediction
>             with torch.no_grad():
>                 logits = model(segment_tensor)
>                 predicted_class = torch.argmax(logits, dim=-1).item()
> 
>             # Map to Afrikaans
>             predicted_label_eng = model.labels[predicted_class] if hasattr(model, 'labels') else "UNKNOWN"
>             af_label = EN_TO_AF.get(predicted_label_eng, predicted_label_eng)
>             print(f"Afrikaans Prediction: {af_label}")
> 
>             # Slide the window
>             for _ in range(STEP_SIZE):
>                 buffer.popleft()
> 
>         # Reduce CPU usage slightly
>         time.sleep(0.01)
> 
> except KeyboardInterrupt:
>     print("Real-time decoding stopped by user.")


</details>
Thanks for asking me to work on this. I will get started on it and keep
this PR's description up to date as I form a plan and make progress.

> Access my neural data and put into sentences here please
@gpgvision93-max
Author

gpgvision93-max:copilot/generate-alphabetical-text


3 participants