Merged
18 changes: 10 additions & 8 deletions ARCHITECTURE.md
@@ -16,7 +16,7 @@ PerfSpect is a performance analysis tool for Linux systems. It collects system c
▼ ▼
┌───────────────────────────────────────┐ ┌────────────────────────────┐
│ ReportingCommand Framework │ │ Custom Command Logic │
│ (internal/common/common.go) │ │ │
│ (internal/workflow/) │ │ │
│ │ │ metrics: Loader pattern, │
│ Used by: report, benchmark, │ │ perf event collection, │
│ telemetry, flamegraph, lock │ │ real-time processing │
@@ -58,15 +58,17 @@ perfspect/
│ ├── lock/ # Lock contention analysis
│ └── config/ # System configuration commands
├── internal/ # Internal packages
│ ├── common/ # Shared types and ReportingCommand framework
│ ├── app/ # Application context and shared types
│ ├── extract/ # Data extraction functions from script outputs
│ ├── workflow/ # Workflow orchestration for reporting commands
│ ├── target/ # Target abstraction (local/remote)
│ ├── script/ # Script execution framework
│ ├── report/ # Report generation (txt, json, html, xlsx)
│ ├── table/ # Table definitions and processing
│ ├── cpus/ # CPU architecture detection
│ ├── progress/ # Progress indicator (multi-spinner)
│ └── util/ # General utilities
└── tools/ # External binaries for target systems
└── tools/ # Binaries used by scripts (embedded at build time)
```

## Key Abstractions
@@ -91,7 +93,7 @@ type Target interface {
- `LocalTarget`: Executes commands directly on the local machine
- `RemoteTarget`: Executes commands via SSH on remote machines
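The interface can be exercised without SSH by writing a stub implementation against a local runner. The sketch below is illustrative only: the method set is abbreviated and the names are not copied from `internal/target`.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Target is an abbreviated, hypothetical version of the interface in
// internal/target; the real method set is larger.
type Target interface {
	RunCommand(name string, args ...string) (stdout string, err error)
	GetName() string
}

// localTarget runs commands directly on the local machine,
// mirroring the role of LocalTarget.
type localTarget struct{}

func (t localTarget) GetName() string { return "localhost" }

func (t localTarget) RunCommand(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	var t Target = localTarget{}
	out, err := t.RunCommand("echo", "hello")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s\n", t.GetName(), out) // prints "localhost: hello"
}
```

Because callers hold only the interface, a `RemoteTarget` backed by SSH can be swapped in without changing collection code.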

### 2. ReportingCommand (`internal/common/common.go`)
### 2. ReportingCommand (`internal/workflow/workflow.go`)

Most commands (`report`, `telemetry`, `flamegraph`, `lock`) follow the same workflow. The `ReportingCommand` struct encapsulates this common flow:
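The shape of that shared flow — a struct the caller fills in, plus a `Run()` that drives collection through reporting — can be sketched as follows. The type and field names here are illustrative stand-ins, not the framework's actual definitions.

```go
package main

import "fmt"

// ReportingCommandSketch mirrors the pattern: commands supply their data
// (table names here, table definitions in the real framework) and call Run,
// which owns the shared collect/process/report sequence.
type ReportingCommandSketch struct {
	Tables []string
}

func (rc ReportingCommandSketch) Run() error {
	// In the real framework: connect to target(s), run scripts,
	// build tables from outputs, then write reports.
	for _, t := range rc.Tables {
		fmt.Println("processing table:", t)
	}
	return nil
}

func main() {
	rc := ReportingCommandSketch{Tables: []string{"System Summary"}}
	if err := rc.Run(); err != nil {
		panic(err)
	}
}
```

Each command therefore stays small: it declares what to collect, and the framework decides how.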

@@ -118,7 +120,7 @@ type ReportingCommand struct {

### 3. Script Engine (`internal/script/`)

Scripts are embedded in the binary using `//go:embed` and executed on targets via a controller script that manages concurrent/sequential execution and signal handling.
Collection scripts are defined in `internal/script/scripts.go`. Script dependencies, i.e., tools used by the scripts to collect data, are in `internal/script/resources/` and embedded in the binary using `//go:embed`. The scripts are executed on targets via a controller script that manages concurrent/sequential execution and signal handling.

**Key concepts:**
- `ScriptDefinition`: Defines a script (template, dependencies, required privileges)
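The controller's concurrent-versus-sequential distinction can be sketched with goroutines and a `sync.WaitGroup`. The function names below are hypothetical; this is not the project's controller code.

```go
package main

import (
	"fmt"
	"sync"
)

// runScript stands in for executing one collection script on the target.
func runScript(name string) string {
	return fmt.Sprintf("%s: done", name)
}

// runAll executes scripts either one at a time (sequential scripts may
// contend for shared resources such as perf counters) or all at once.
func runAll(names []string, concurrent bool) []string {
	results := make([]string, len(names))
	if !concurrent {
		for i, n := range names {
			results[i] = runScript(n)
		}
		return results
	}
	var wg sync.WaitGroup
	for i, n := range names {
		wg.Add(1)
		go func(i int, n string) {
			defer wg.Done()
			results[i] = runScript(n) // each goroutine writes its own slot
		}(i, n)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(runAll([]string{"lscpu", "dmidecode"}, true))
}
```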
@@ -179,7 +181,7 @@ The `NewLoader()` factory function returns the appropriate loader based on CPU m
- Creates ReportingCommand with table definitions
- Calls rc.Run()

3. internal/common/common.go (ReportingCommand.Run):
3. internal/workflow/workflow.go (ReportingCommand.Run):
- Creates RemoteTarget from flags
- Validates connectivity and privileges
- Calls outputsFromTargets()
@@ -196,7 +198,7 @@ The `NewLoader()` factory function returns the appropriate loader based on CPU m

6. Back in Run():
- Calls createReports() with collected data
- internal/table/ processes tables, extracts field values
- internal/table/ processes tables using internal/extract/ helper functions
- internal/report/ generates reports in requested formats

7. Output:
@@ -229,7 +231,7 @@ make test # Run unit tests
make check # Run all code quality checks (format, vet, lint)
```

Test files are colocated with source files (e.g., `common_test.go` alongside `common.go`).
Test files are colocated with source files (e.g., `extract_test.go` alongside `extract.go`).

## Functional Testing
Functional tests are located in an Intel internal GitHub repository. The tests run against various Linux distributions and CPU architectures on internal servers and public cloud systems to validate end-to-end functionality.
48 changes: 21 additions & 27 deletions CONTRIBUTING.md
@@ -13,9 +13,9 @@ Thank you for your interest in contributing to PerfSpect! This document provides
### Building

```bash
make # Build the binary
make test # Run unit tests
make check # Run all code quality checks
builder/build.sh # Complete build, requires Docker
make # Build the x86 binary
make check # Run all code quality checks, including unit tests
```

### Project Structure
@@ -49,7 +49,7 @@ See [ARCHITECTURE.md](./ARCHITECTURE.md) for a detailed overview of the codebase
make test

# Run specific test
go test -v ./internal/common/... -run TestName
go test -v ./internal/extract/... -run TestName

# Test locally
./perfspect report
@@ -68,7 +68,7 @@ go test -v ./internal/common/... -run TestName
package yourcommand

import (
"perfspect/internal/common"
"perfspect/internal/workflow"
"perfspect/internal/table"
"github.com/spf13/cobra"
)
@@ -82,12 +82,12 @@ var Cmd = &cobra.Command{

func init() {
// Add command-specific flags here
common.AddTargetFlags(Cmd)
common.AddFormatFlag(Cmd)
workflow.AddTargetFlags(Cmd)
workflow.AddFormatFlag(Cmd)
}

func runYourCommand(cmd *cobra.Command, args []string) error {
rc := common.ReportingCommand{
rc := workflow.ReportingCommand{
Cmd: cmd,
Tables: yourTables, // Define tables for data collection
}
@@ -116,20 +116,15 @@ Tables define what data to collect. Add to the relevant command's table definiti

```go
{
Name: "Your Table Name",
Category: "Category",
ScriptNames: []string{"script_that_provides_data"},
Fields: []table.FieldDefinition{
{
Name: "Field Name",
ValuesFunc: func(outputs map[string]script.ScriptOutput) []string {
// Parse script output and return field values
output := outputs["script_that_provides_data"]
// ... extraction logic ...
return []string{value}
},
},
},
YourTableName: {
Name: YourTableName,
ScriptNames: []string{script.YourScriptName},
FieldsFunc: YourTableValues},
}
func YourTableValues() []table.FieldDefinition {
return []table.FieldDefinition{
// Define fields here
}
}
```
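A hypothetical `FieldsFunc` body might look like the sketch below. The field name, script name, and regex are illustrative, and the `ScriptOutput`/`FieldDefinition` types are minimal stand-ins defined locally so the sketch compiles on its own — the real types live in `internal/script` and `internal/table`.

```go
package main

import (
	"fmt"
	"regexp"
)

// Minimal stand-ins for the project's script and table types.
type ScriptOutput struct{ Stdout string }

type FieldDefinition struct {
	Name       string
	ValuesFunc func(outputs map[string]ScriptOutput) []string
}

// HostnameTableValues is a hypothetical FieldsFunc: it returns the table's
// field definitions, each paired with a function that parses script output.
func HostnameTableValues() []FieldDefinition {
	return []FieldDefinition{
		{
			Name: "Hostname",
			ValuesFunc: func(outputs map[string]ScriptOutput) []string {
				re := regexp.MustCompile(`(?m)^hostname:\s*(.+)$`)
				m := re.FindStringSubmatch(outputs["host_info"].Stdout)
				if m == nil {
					return []string{""}
				}
				return []string{m[1]}
			},
		},
	}
}

func main() {
	outputs := map[string]ScriptOutput{
		"host_info": {Stdout: "hostname: server42\n"},
	}
	for _, f := range HostnameTableValues() {
		fmt.Printf("%s = %s\n", f.Name, f.ValuesFunc(outputs)[0]) // prints "Hostname = server42"
	}
}
```

Keeping the parsing logic in a named function (rather than an inline literal) is what makes the table definitions short and the extraction helpers independently testable.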

@@ -139,9 +134,8 @@ Tables define what data to collect. Add to the relevant command's table definiti

```go
var YourScript = ScriptDefinition{
Name: "your_script",
ScriptTemplate: `#!/bin/bash
# Your script content
Name: YourScriptName,
ScriptTemplate: `# Your script content
echo "output"
`,
Superuser: false, // true if requires root
@@ -152,7 +146,7 @@ echo "output"

2. Reference in your table's `ScriptNames`

3. If the script needs external binaries, add them to `tools/` or embed in `internal/script/resources/`
3. If the script needs external binaries, add them to `tools/`; the build step embeds them in `internal/script/resources/`

### Adding Metrics for a New CPU

@@ -166,7 +160,7 @@ const UarchYourCPU = "YourCPU"

3. Choose appropriate loader or implement new one in `cmd/metrics/loader.go`

4. Add metric/event definitions to `cmd/metrics/resources/events/` and `cmd/metrics/resources/metrics/`
4. Add metric/event definitions to the associated loader directory in `cmd/metrics/resources`

5. Update `NewLoader()` switch statement
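Steps 3–5 can be sketched as a factory switch. The `Loader` interface, loader type, and directory layout below are placeholders, not the project's actual identifiers in `cmd/metrics/loader.go`.

```go
package main

import "fmt"

// Loader selects the metric/event definition files for a microarchitecture.
// This interface and the type below are illustrative stand-ins.
type Loader interface {
	EventsDir() string
}

type genericLoader struct{ uarch string }

func (l genericLoader) EventsDir() string { return "resources/" + l.uarch }

const UarchYourCPU = "YourCPU"

// NewLoader returns the loader for the given microarchitecture; supporting
// a new CPU means adding a case to this switch.
func NewLoader(uarch string) (Loader, error) {
	switch uarch {
	case UarchYourCPU:
		return genericLoader{uarch: "yourcpu"}, nil
	default:
		return nil, fmt.Errorf("unsupported microarchitecture: %s", uarch)
	}
}

func main() {
	l, err := NewLoader(UarchYourCPU)
	if err != nil {
		panic(err)
	}
	fmt.Println(l.EventsDir()) // prints "resources/yourcpu"
}
```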

75 changes: 39 additions & 36 deletions cmd/benchmark/benchmark.go
@@ -1,9 +1,9 @@
// Package benchmark is a subcommand of the root command. It runs performance benchmarks on target(s).
package benchmark

// Copyright (C) 2021-2025 Intel Corporation
// SPDX-License-Identifier: BSD-3-Clause

// Package benchmark is a subcommand of the root command. It runs performance benchmarks on target(s).
package benchmark

import (
"fmt"
"log/slog"
Expand All @@ -16,7 +16,10 @@ import (
"github.com/spf13/pflag"
"github.com/xuri/excelize/v2"

"perfspect/internal/common"
"perfspect/internal/app"
"perfspect/internal/extract"
"perfspect/internal/workflow"

"perfspect/internal/cpus"
"perfspect/internal/report"
"perfspect/internal/script"
Expand All @@ -27,10 +30,10 @@ import (
const cmdName = "benchmark"

var examples = []string{
fmt.Sprintf(" Run all benchmarks: $ %s %s", common.AppName, cmdName),
fmt.Sprintf(" Run specific benchmarks: $ %s %s --speed --power", common.AppName, cmdName),
fmt.Sprintf(" Benchmark remote target: $ %s %s --target 192.168.1.1 --user fred --key fred_key", common.AppName, cmdName),
fmt.Sprintf(" Benchmark multiple targets:$ %s %s --targets targets.yaml", common.AppName, cmdName),
fmt.Sprintf(" Run all benchmarks: $ %s %s", app.Name, cmdName),
fmt.Sprintf(" Run specific benchmarks: $ %s %s --speed --power", app.Name, cmdName),
fmt.Sprintf(" Benchmark remote target: $ %s %s --target 192.168.1.1 --user fred --key fred_key", app.Name, cmdName),
fmt.Sprintf(" Benchmark multiple targets:$ %s %s --targets targets.yaml", app.Name, cmdName),
}

var Cmd = &cobra.Command{
@@ -81,7 +84,7 @@ const (

var benchmarkSummaryTableName = "Benchmark Summary"

var categories = []common.Category{
var categories = []app.Category{
{FlagName: flagSpeedName, FlagVar: &flagSpeed, DefaultValue: false, Help: "CPU speed benchmark", Tables: []table.TableDefinition{tableDefinitions[SpeedBenchmarkTableName]}},
{FlagName: flagPowerName, FlagVar: &flagPower, DefaultValue: false, Help: "power consumption benchmark", Tables: []table.TableDefinition{tableDefinitions[PowerBenchmarkTableName]}},
{FlagName: flagTemperatureName, FlagVar: &flagTemperature, DefaultValue: false, Help: "temperature benchmark", Tables: []table.TableDefinition{tableDefinitions[TemperatureBenchmarkTableName]}},
@@ -97,13 +100,13 @@ func init() {
Cmd.Flags().BoolVar(benchmark.FlagVar, benchmark.FlagName, benchmark.DefaultValue, benchmark.Help)
}
// set up other flags
Cmd.Flags().StringVar(&common.FlagInput, common.FlagInputName, "", "")
Cmd.Flags().StringVar(&app.FlagInput, app.FlagInputName, "", "")
Cmd.Flags().BoolVar(&flagAll, flagAllName, true, "")
Cmd.Flags().StringSliceVar(&common.FlagFormat, common.FlagFormatName, []string{report.FormatAll}, "")
Cmd.Flags().StringSliceVar(&app.FlagFormat, app.FlagFormatName, []string{report.FormatAll}, "")
Cmd.Flags().BoolVar(&flagNoSystemSummary, flagNoSystemSummaryName, false, "")
Cmd.Flags().StringVar(&flagStorageDir, flagStorageDirName, "/tmp", "")

common.AddTargetFlags(Cmd)
workflow.AddTargetFlags(Cmd)

Cmd.SetUsageFunc(usageFunc)
}
@@ -133,25 +136,25 @@ func usageFunc(cmd *cobra.Command) error {
return nil
}

func getFlagGroups() []common.FlagGroup {
var groups []common.FlagGroup
flags := []common.Flag{
func getFlagGroups() []app.FlagGroup {
var groups []app.FlagGroup
flags := []app.Flag{
{
Name: flagAllName,
Help: "run all benchmarks",
},
}
for _, benchmark := range categories {
flags = append(flags, common.Flag{
flags = append(flags, app.Flag{
Name: benchmark.FlagName,
Help: benchmark.Help,
})
}
groups = append(groups, common.FlagGroup{
groups = append(groups, app.FlagGroup{
GroupName: "Benchmark Options",
Flags: flags,
})
flags = []common.Flag{
flags = []app.Flag{
{
Name: flagNoSystemSummaryName,
Help: "do not include system summary in output",
@@ -161,22 +164,22 @@ func getFlagGroups() []common.FlagGroup {
Help: "existing directory where storage performance benchmark data will be temporarily stored",
},
{
Name: common.FlagFormatName,
Name: app.FlagFormatName,
Help: fmt.Sprintf("choose output format(s) from: %s", strings.Join(append([]string{report.FormatAll}, report.FormatOptions...), ", ")),
},
}
groups = append(groups, common.FlagGroup{
groups = append(groups, app.FlagGroup{
GroupName: "Other Options",
Flags: flags,
})
groups = append(groups, common.GetTargetFlagGroup())
flags = []common.Flag{
groups = append(groups, workflow.GetTargetFlagGroup())
flags = []app.Flag{
{
Name: common.FlagInputName,
Name: app.FlagInputName,
Help: "\".raw\" file, or directory containing \".raw\" files. Will skip data collection and use raw data for reports.",
},
}
groups = append(groups, common.FlagGroup{
groups = append(groups, app.FlagGroup{
GroupName: "Advanced Options",
Flags: flags,
})
@@ -194,27 +197,27 @@ func validateFlags(cmd *cobra.Command, args []string) error {
}
}
// validate format options
for _, format := range common.FlagFormat {
for _, format := range app.FlagFormat {
formatOptions := append([]string{report.FormatAll}, report.FormatOptions...)
if !slices.Contains(formatOptions, format) {
return common.FlagValidationError(cmd, fmt.Sprintf("format options are: %s", strings.Join(formatOptions, ", ")))
return workflow.FlagValidationError(cmd, fmt.Sprintf("format options are: %s", strings.Join(formatOptions, ", ")))
}
}
// validate storage dir
if flagStorageDir != "" {
if !util.IsValidDirectoryName(flagStorageDir) {
return common.FlagValidationError(cmd, fmt.Sprintf("invalid storage directory name: %s", flagStorageDir))
return workflow.FlagValidationError(cmd, fmt.Sprintf("invalid storage directory name: %s", flagStorageDir))
}
// if no target is specified, i.e., we have a local target only, check if the directory exists
if !cmd.Flags().Lookup(common.FlagTargetsFileName).Changed && !cmd.Flags().Lookup(common.FlagTargetHostName).Changed {
if !cmd.Flags().Lookup(workflow.FlagTargetsFileName).Changed && !cmd.Flags().Lookup(workflow.FlagTargetHostName).Changed {
if _, err := os.Stat(flagStorageDir); os.IsNotExist(err) {
return common.FlagValidationError(cmd, fmt.Sprintf("storage dir does not exist: %s", flagStorageDir))
return workflow.FlagValidationError(cmd, fmt.Sprintf("storage dir does not exist: %s", flagStorageDir))
}
}
}
// common target flags
if err := common.ValidateTargetFlags(cmd); err != nil {
return common.FlagValidationError(cmd, err.Error())
if err := workflow.ValidateTargetFlags(cmd); err != nil {
return workflow.FlagValidationError(cmd, err.Error())
}
return nil
}
@@ -223,7 +226,7 @@ func runCmd(cmd *cobra.Command, args []string) error {
var tables []table.TableDefinition
// add system summary table if not disabled
if !flagNoSystemSummary {
tables = append(tables, common.TableDefinitions[common.SystemSummaryTableName])
tables = append(tables, app.TableDefinitions[app.SystemSummaryTableName])
}
// add benchmark tables
selectedBenchmarkCount := 0
@@ -234,12 +237,12 @@ func runCmd(cmd *cobra.Command, args []string) error {
}
}
// include benchmark summary table if all benchmarks are selected
var summaryFunc common.SummaryFunc
var summaryFunc app.SummaryFunc
if selectedBenchmarkCount == len(categories) {
summaryFunc = benchmarkSummaryFromTableValues
}

reportingCommand := common.ReportingCommand{
reportingCommand := workflow.ReportingCommand{
Cmd: cmd,
ScriptParams: map[string]string{"StorageDir": flagStorageDir},
Tables: tables,
@@ -312,8 +315,8 @@ func benchmarkSummaryFromTableValues(allTableValues []table.TableValues, outputs
{Name: "Minimum Power", Values: []string{getValueFromTableValues(getTableValues(allTableValues, PowerBenchmarkTableName), "Minimum Power", 0)}},
{Name: "Memory Peak Bandwidth", Values: []string{maxMemBW}},
{Name: "Memory Minimum Latency", Values: []string{minLatency}},
{Name: "Microarchitecture", Values: []string{common.UarchFromOutput(outputs)}},
{Name: "Sockets", Values: []string{common.ValFromRegexSubmatch(outputs[script.LscpuScriptName].Stdout, `^Socket\(s\):\s*(.+)$`)}},
{Name: "Microarchitecture", Values: []string{extract.UarchFromOutput(outputs)}},
{Name: "Sockets", Values: []string{extract.ValFromRegexSubmatch(outputs[script.LscpuScriptName].Stdout, `^Socket\(s\):\s*(.+)$`)}},
},
}
}
4 changes: 2 additions & 2 deletions cmd/benchmark/benchmark_renderers.go
@@ -1,8 +1,8 @@
package benchmark

// Copyright (C) 2021-2025 Intel Corporation
// SPDX-License-Identifier: BSD-3-Clause

package benchmark

import (
"fmt"
"log/slog"