Gemini 2.5 Pro Experimental uses the **"D-ND Hybrid Model Simulation & Observational Framework v4.0"** as its base, integrating the most valuable ideas that have emerged (such as the `Transformation` class and a clearer link to the theory) and structuring the document to include the refined purpose and the hypothesis.
---
**Author:** Meta Master 3 (consolidated and refined by the Systemic Cognitive Analyst Gemini 2.5 Pro Experimental)
**Version:** 4.1
## 1. Purpose of the Framework
Its **primary purpose** is to investigate how the interaction between different transformative logics – linear contraction towards an attractor (`P`), IFS-style fractal self-similarity, gradual transitions (`blend`), and custom semantic transformations (`Φ`) – influences the **stability, geometry, and complexity** of the emergent patterns (`R`).
It therefore provides a **testbed** for:
1. **Visualizing** processes of convergence and complexification in abstract systems.
2. **Testing hypotheses** about how different "logical rules" (the transformations) drive the system's evolution towards coherent states.
3. **Exploring the influence of initial conditions**, including those derived from semantic mappings, on the system's evolution and final state.
4. **(Potentially) studying** phase transitions or qualitative changes in system behavior as parameters vary.
The framework is intended as a tool for exploratory research in complex systems, abstract cognitive modeling, and the visualization of informational processes. The term **"Tensor Field"**, used in earlier versions, should be understood in this version as a **metaphor** for the geometric and relational configuration of the points `R` in the logical space, not as a mathematical implementation of tensors.
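To make the four logics concrete, here is a minimal, self-contained sketch, assuming points are plain Python `complex` numbers as in the implementation in Section 3. Note that `blend` here picks a logic per point, a simplification of the framework's `run_blended_phase`, which mixes the two result sets instead.

```python
import random

P = complex(0.5, 0.5)   # attractor for the linear phase
LAM = 0.1               # lambda_linear, the contraction strength

def linear(z):
    # Contraction towards P: z' = (1 - lambda)*z + lambda*P
    return (1 - LAM) * z + LAM * P

def fractal(z):
    # Stochastic IFS: pick one of two affine maps (T_A or T_B)
    if random.random() < 0.5:
        return z * 0.5 + complex(0, 0.5)   # T_A
    return z * 0.5 + complex(0.5, 0)       # T_B

def blend(z, blend_factor):
    # Probabilistic per-point mix of the two logics (simplified)
    return linear(z) if random.random() < blend_factor else fractal(z)

def phi_reflection(z):
    # Example custom Phi: reflection through P, i.e. z' = 2P - z
    return 2 * P - z
```

Iterating `linear` over a set of points contracts it towards `P`; iterating `fractal` spreads it over the IFS attractor; `Φ` transformations inject arbitrary custom logic.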
## 2. Hypothesis Selected for Experimental Testing
To demonstrate the framework's exploratory utility, we propose testing the following hypothesis:
**Hypothesis H1: Influence of the Initial Semantic Configuration on the Evolutionary Trajectory**
> "The initial geometric configuration R(0), generated by mapping different sets of semantic concepts via `map_semantic_trajectory`, measurably and significantly influences:
>
> * **(a)** The time (number of iterations) needed to reach stability during the linear phase, when the 'hausdorff' transition mode is used.
> * **(b)** The qualitative geometric characteristics (e.g., overall shape, density distribution, symmetries) of the final point set `all_points` after a fixed total number of iterations, compared with simulations started from a single point or from random configurations, with all other dynamic parameters held identical."
**Basic Experimental Design:**
1. **Independent Variable:** The list of concepts `concepts` passed to `map_semantic_trajectory`. At least 3-4 different lists will be defined (e.g., concepts related to mathematics, nature, and art, plus a list of random terms).
2. **Control Conditions:**
* Standard simulation starting from `R(0) = {complex(0, 0)}`.
* (Optional) Simulation starting from a small set of random points `R(0)`.
3. **Dependent Variables:**
* **For H1(a):** The iteration number `t` at the transition out of the linear phase (recorded when `params.transition_mode == 'hausdorff'`).
* **For H1(b):**
* Qualitative visual analysis of the output of `visualize_results`.
* (Optional, requires additional functions) Simple quantitative measures: bounding box of the set `all_points`, centroid, coordinate variance, approximate fractal dimension estimate (e.g., box counting, if implemented).
4. **Fixed Parameters:** `iterations`, `lambda_linear`, the IFS parameters (`scale_factor_A/B`, `offset_A/B`), `P`, `blend_iterations`, and `transition_threshold` will be held constant across all comparative runs. `transition_mode='hausdorff'` will be used to test H1(a), and a fixed number of `iterations` for H1(b).
5. **Procedure:**
* Run `map_semantic_trajectory` on each concept list to generate the `R0` sets.
* Run `run_full_simulation` for each `R0` (and for the control conditions), passing `R0` to `initialize_system` (which in v4.1 accepts an optional `R0`).
* Collect data on the dependent variables (transition time, final plots).
6. **Analysis:** Compare transition times across the different initial conditions. Analyze visually and (where possible) quantitatively the differences in the final generated patterns.
This experiment aims to verify whether the initial "history" or "semantic structure" imprinted on the system via `R0` leaves a lasting mark on its dynamic evolution and final configuration.
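The optional quantitative measures mentioned under the dependent variables are not part of v4.1; the following is a minimal sketch of how they could be computed over a set of complex points (the helper name `geometry_measures` and the grid scales are illustrative, not part of the framework):

```python
import numpy as np

def geometry_measures(points, box_sizes=(0.5, 0.25, 0.125, 0.0625)):
    """Bounding box, centroid, variance, and a rough box-counting
    dimension estimate for a set of complex points (hypothetical helper)."""
    arr = np.array([(z.real, z.imag) for z in points])
    measures = {
        "bbox": (arr.min(axis=0), arr.max(axis=0)),
        "centroid": arr.mean(axis=0),
        "variance": arr.var(axis=0),
    }
    # Box counting: count occupied grid cells at several scales, then fit
    # log(count) vs log(1/size); the slope approximates the fractal dimension.
    counts = []
    for s in box_sizes:
        cells = {tuple(np.floor(p / s).astype(int)) for p in arr}
        counts.append(len(cells))
    slope = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)[0]
    measures["box_dim"] = slope
    return measures
```

For example, points sampled densely along a line segment should yield a box-counting estimate near 1, while a filled planar region should approach 2; these sanity checks make the measure usable for comparing the `all_points` sets of different runs.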
## 3. Framework Architecture and Components (Based on v4.0 with Improvements)
The framework is implemented as a Python module with the following main components:
# -*- coding: utf-8 -*-
"""
Title: D-ND Hybrid Simulation Framework
Version: 4.1 (Consolidated & Documented)
Author: Meta Master 3 (Refined by ACS)
Description:
Simulates the D-ND (Dual-Non-Dual) cognitive-logical model to explore
the emergence and transformation of coherent structures (R*) in the
complex plane. Integrates linear, blended, and fractal transformation
phases, supports semantic input mapping, Φ-transformations from textual
prompts (basic), and configurable transition logic. Designed as a testbed
for hypotheses about dynamics in abstract cognitive-informational systems.
"""
import random
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import directed_hausdorff
import time
import logging
# Optional: For potential future complexity measures
# from scipy.spatial import ConvexHull
# Initialize logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
# ============================================================
# 1. SYSTEM CONFIGURATION & PARAMETERS
# ============================================================
class SystemParameters:
"""
Encapsulates all configuration parameters for a D-ND simulation run.
"""
def __init__(self,
# Simulation Control
iterations=10000, # Total simulation steps
# Phase Parameters
lambda_linear=0.1, # Strength of linear contraction towards P
P=complex(0.5, 0.5), # Attractor point for linear phase
blend_iterations=50, # Duration of the blend phase
# Fractal (IFS-like) Parameters
scale_factor_A=0.5, # Scaling factor for transformation T_A
scale_factor_B=0.5, # Scaling factor for transformation T_B
offset_A=complex(0, 0.5), # Offset for transformation T_A
offset_B=complex(0.5, 0), # Offset for transformation T_B
# Transition Logic
transition_mode='hausdorff', # 'hausdorff' or 'time'
transition_threshold=0.005, # Hausdorff distance threshold for stability
time_transition_iteration=100, # Iteration count for time-based transition
# Semantic / Custom Transformations (Φ)
generated_phi=None, # List of Transformation objects (see below)
# Unused / Reserved Parameters (Future Use / Legacy)
alpha=0.4, beta=0.4, gamma=0.2 # Currently unused - document or remove if obsolete
):
self.iterations = iterations
self.transition_threshold = transition_threshold
self.lambda_linear = lambda_linear
self.P = P
# Ensure blend_iterations doesn't exceed total iterations
self.blend_iterations = min(blend_iterations, iterations)
self.scale_factor_A = scale_factor_A
self.scale_factor_B = scale_factor_B
self.offset_A = offset_A
self.offset_B = offset_B
self.transition_mode = transition_mode
self.time_transition_iteration = time_transition_iteration
# Store generated_phi as a list of Transformation objects
self.generated_phi = generated_phi if generated_phi else []
# Document unused parameters clearly
self.alpha = alpha # RESERVED/UNUSED in v4.1
self.beta = beta # RESERVED/UNUSED in v4.1
self.gamma = gamma # RESERVED/UNUSED in v4.1
# Validation (Optional but Recommended)
if self.transition_mode not in ['hausdorff', 'time']:
raise ValueError("transition_mode must be 'hausdorff' or 'time'")
logging.info(f"SystemParameters initialized: {self.__dict__}")
# ============================================================
# 2. TRANSFORMATION ABSTRACTION (Φ)
# ============================================================
class Transformation:
"""
Represents a generic transformation function Φ(z) with potential parameters.
Useful for encapsulating custom logic, including those generated from text.
"""
def __init__(self, func, name="unnamed_phi", **kwargs):
self.func = func
self.name = name
self.kwargs = kwargs # Store any parameters needed by func
logging.debug(f"Transformation '{self.name}' created.")
def apply(self, z, params=None):
"""Applies the transformation function to a complex number z."""
# Pass relevant system parameters if needed by the function, along with specific kwargs
# This requires the wrapped function 'func' to potentially accept 'params' or specific kwargs
try:
# Simplest case: func only needs z and its own kwargs
return self.func(z, **self.kwargs)
# More complex case (if func needs system params):
# return self.func(z, params=params, **self.kwargs)
except Exception as e:
logging.error(f"Error applying transformation '{self.name}' to {z}: {e}")
return z # Return original point on error, or handle differently
# ============================================================
# 3. CORE SIMULATION LOGIC & PHASES
# ============================================================
def initialize_system(params, R0=None):
"""
Initializes the simulation state.
Accepts an optional initial set of points R0.
"""
if R0 is None or not isinstance(R0, set) or not R0:
R = {complex(0, 0)} # Default start: single point at origin
logging.info("Initialized system from default origin.")
else:
R = R0.copy() # Use the provided initial set
logging.info(f"Initialized system with R0 containing {len(R)} points.")
all_points = set(R) # Accumulates all points generated over time
R_time_series = [R.copy()] # Stores the state R at each time step
# Simulation state flags
current_phase = 'linear' # Start with linear phase
blend_counter = 0
transition_occurred = False
# Timing and Data Collection
start_time = time.time()
transition_info = {'time': None, 'iteration': None} # Store transition details
# Placeholder for potentially richer data collection
simulation_log = [{'t': 0, 'phase': current_phase, '|R|': len(R)}]
return R, all_points, R_time_series, current_phase, blend_counter, transition_occurred, transition_info, simulation_log, start_time
# --- Phase Implementation Functions ---
def T_A(z, params):
"""Fractal Transformation A (IFS-like)."""
return z * params.scale_factor_A + params.offset_A
def T_B(z, params):
"""Fractal Transformation B (IFS-like)."""
# Scale first, then offset, consistent with T_A
return z * params.scale_factor_B + params.offset_B
def run_linear_phase(R, params):
"""Applies the linear contraction towards P."""
# Formula: z_next = z + lambda * (P - z) = (1 - lambda)*z + lambda*P
lambda_eff = params.lambda_linear
P = params.P
R_next = {(1 - lambda_eff) * z + lambda_eff * P for z in R}
# Potential check for numerical stability if lambda is large
return R_next
def run_fractal_phase(R, params):
"""Applies the stochastic fractal transformations (T_A, T_B)."""
R_next = set()
# Simple stochastic IFS: 50/50 chance for T_A or T_B for each point
# Consider alternative IFS types if needed (e.g., deterministic IFS)
for z in R:
if random.random() < 0.5:
R_next.add(T_A(z, params))
else:
R_next.add(T_B(z, params))
return R_next
def run_blended_phase(R, params, blend_factor):
"""
Applies a probabilistic blend of linear and fractal transformations.
Blend_factor (0 to 1): Probability of applying linear vs fractal logic.
Note: This implementation blends the *results*, not the functions directly.
Consider alternative blending methods if needed.
"""
linear_next = run_linear_phase(R, params)
fractal_next = run_fractal_phase(R, params)
blended = set()
# Sample from linear results based on blend factor
for z_lin in linear_next:
if random.random() < blend_factor:
blended.add(z_lin)
# Sample from fractal results based on inverse blend factor
for z_frac in fractal_next:
if random.random() < (1.0 - blend_factor):
blended.add(z_frac)
# Ensure non-empty set if both sampling fail (unlikely with many points)
if not blended and (linear_next or fractal_next):
blended = linear_next.union(fractal_next) # Fallback: combine all
return blended
def run_generated_phi_phase(R, params):
"""Applies the list of custom Φ transformations."""
if not params.generated_phi:
return R # No custom transformations defined
R_next = set()
# Apply each Φ transformation to the current set R
# This currently means the output size can grow significantly if multiple Phis exist
# Consider alternatives: stochastic selection of one Phi, sequential application, etc.
for phi_transform in params.generated_phi:
try:
# Apply the transformation to each point in the current set R
transformed_points = {phi_transform.apply(z, params) for z in R}
R_next.update(transformed_points)
except Exception as e:
logging.error(f"Error applying custom transformation '{phi_transform.name}': {e}")
# Handle empty set possibility if all transformations failed
if not R_next and R:
return R # Fallback to previous state if all Φ failed
return R_next
# --- Stability Check ---
def check_stability(R_prev, R_curr, params):
"""Checks if the system has stabilized based on Hausdorff distance."""
if not R_prev or not R_curr or len(R_prev) == 0 or len(R_curr) == 0:
logging.warning("Cannot compute Hausdorff distance on empty sets.")
return False # Cannot determine stability if one set is empty
# Convert sets of complex numbers to NumPy arrays of (real, imag) coordinates
try:
arr_prev = np.array([(z.real, z.imag) for z in R_prev])
arr_curr = np.array([(z.real, z.imag) for z in R_curr])
# directed_hausdorff returns (distance, index_in_u, index_in_v)
dist1 = directed_hausdorff(arr_prev, arr_curr)[0]
dist2 = directed_hausdorff(arr_curr, arr_prev)[0]
hausdorff_distance = max(dist1, dist2)
logging.debug(f"Hausdorff Distance: {hausdorff_distance:.6f}")
return hausdorff_distance < params.transition_threshold
except Exception as e:
logging.error(f"Error calculating Hausdorff distance: {e}")
# Decide how to handle calculation errors - assume not stable?
return False
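# Illustrative only (not part of the pipeline): a standalone check of the
# symmetric Hausdorff distance computed by check_stability. The imports
# duplicate those at the top of the module so the snippet runs on its own.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def _hausdorff_demo():
    """With u = {(0, 0)} and v = {(3, 4)}, each directed distance is the
    Euclidean distance of the single pair, so the symmetric max is 5.0."""
    u = np.array([[0.0, 0.0]])
    v = np.array([[3.0, 4.0]])
    return max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])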
# ============================================================
# 4. MAIN SIMULATION ORCHESTRATOR
# ============================================================
def run_full_simulation(params, R0=None):
"""
Orchestrates the full D-ND simulation through its phases.
"""
R, all_points, R_time_series, current_phase, blend_counter, \
transition_occurred, transition_info, simulation_log, start_time = initialize_system(params, R0)
logging.info(f"Starting simulation: {params.iterations} iterations, initial phase: {current_phase}")
for t in range(1, params.iterations + 1):
R_prev = R.copy() # Store previous state for stability check
# --- Phase Logic ---
if current_phase == 'linear':
R = run_linear_phase(R, params)
# Check for transition
transition_condition = False
if params.transition_mode == 'hausdorff':
transition_condition = check_stability(R_prev, R, params)
elif params.transition_mode == 'time':
transition_condition = (t >= params.time_transition_iteration)
if transition_condition and not transition_occurred:
current_phase = 'blend'
transition_occurred = True
transition_info['time'] = time.time() - start_time
transition_info['iteration'] = t
logging.info(f"Transition triggered at t={t}. Mode: {params.transition_mode}. Moving to Blend phase.")
elif current_phase == 'blend':
if blend_counter < params.blend_iterations:
# Blend factor increases from 0 to nearly 1 during the blend phase
blend_factor = blend_counter / params.blend_iterations
R = run_blended_phase(R_prev, params, blend_factor) # Blend from R_prev so the linear and fractal branches share the same input state
blend_counter += 1
else:
# Decide next phase after blending finishes
if params.generated_phi:
current_phase = 'generated_phi'
logging.info(f"Blend phase complete at t={t}. Moving to Generated Phi phase.")
else:
current_phase = 'fractal'
logging.info(f"Blend phase complete at t={t}. No Phi defined. Moving to Fractal phase.")
elif current_phase == 'generated_phi':
R = run_generated_phi_phase(R, params)
# Note: This phase currently runs indefinitely until iterations end.
# Consider adding logic to transition out of it (e.g., after N steps, or to fractal)
elif current_phase == 'fractal':
R = run_fractal_phase(R, params)
# Fractal phase continues until the end of iterations
# --- State Update & Logging ---
if not R:
logging.warning(f"Iteration t={t}: Resultant set R is empty! Check phase logic or parameters.")
# Decide how to handle empty R - stop simulation? Reset? Use R_prev?
R = R_prev # Fallback to prevent error propagation
# break # Option: Stop simulation
all_points.update(R)
R_time_series.append(R.copy())
simulation_log.append({'t': t, 'phase': current_phase, '|R|': len(R)})
if t % 100 == 0: # Log progress periodically
logging.debug(f"t={t}, Phase: {current_phase}, |R|={len(R)}, |All|={len(all_points)}")
end_time = time.time()
total_time = end_time - start_time
logging.info(f"Simulation finished. Total time: {total_time:.2f} seconds.")
results = {
"parameters": params,
"final_R": R,
"all_points": all_points,
"R_time_series": R_time_series,
"simulation_log": simulation_log,
"transition_info": transition_info,
"total_time": total_time
}
return results
# ============================================================
# 5. SEMANTIC EXTENSIONS (Initialization & Φ Generation)
# ============================================================
def map_semantic_trajectory(concepts, method='circle'):
"""
Maps a list of semantic concepts to an initial set of points R0 in the complex plane.
Parameters:
concepts (list): List of strings (semantic nodes).
method (str): 'circle' (points on unit circle) or 'spiral' (points on outward spiral).
Returns:
set: R0, the initial configuration of complex points.
"""
R0 = set()
n = len(concepts)
if n == 0:
return R0
logging.info(f"Mapping {n} concepts using '{method}' method.")
if method == 'circle':
angle_step = 2 * np.pi / n
radius = 1.0
for i, concept in enumerate(concepts):
angle = i * angle_step
# Map concept to a point on the unit circle
point = radius * np.exp(1j * angle) # complex(cos(angle), sin(angle))
R0.add(point)
# Optional: Associate concept name with point? (Requires different return type)
elif method == 'spiral':
angle_step = np.pi / 6 # Fixed angle step (e.g., 30 degrees)
radius_step = 0.5 / n # Radius increases slightly for each concept
current_angle = 0
current_radius = 0.5 # Starting radius
for i, concept in enumerate(concepts):
point = current_radius * np.exp(1j * current_angle)
R0.add(point)
current_angle += angle_step
current_radius += radius_step
else:
raise ValueError("Unsupported mapping method. Choose 'circle' or 'spiral'.")
return R0
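# Illustrative only: the 'circle' branch above places n concepts at equally
# spaced angles on the unit circle, so for n = 4 the points are approximately
# 1, i, -1, -i. This standalone re-statement (with its own import) mirrors
# that formula for quick inspection.
import numpy as np

def _circle_mapping_demo(n):
    """Equally spaced points on the unit circle, as in method='circle'."""
    return [np.exp(1j * 2 * np.pi * i / n) for i in range(n)]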
def generate_phi_from_text_basic(prompt_text, P=complex(0.5, 0.5), lambda_linear=0.1):
"""
[Placeholder] Generates a Transformation object based on simple keywords in text.
This is a basic implementation, NOT using advanced NLP/GPT.
Returns:
Transformation: A Transformation object wrapping the generated function.
"""
prompt_lower = prompt_text.lower()
phi_func = None
phi_name = f"phi_from:'{prompt_text[:20]}...'"
# Simple keyword matching logic
if "opposti" in prompt_lower and "attrazione" in prompt_lower:
# Linear attraction towards P (same as linear phase func)
def func(z, P_val=P, lambda_val=lambda_linear):
return (1 - lambda_val) * z + lambda_val * P_val
phi_func = func
phi_name = "phi_attraction_to_P"
elif "riflessione" in prompt_lower:
# Reflection through point P
def func(z, P_val=P):
return P_val - (z - P_val) # Reflect vector z-P relative to P
phi_func = func
phi_name = "phi_reflection_at_P"
elif "espansione" in prompt_lower:
# Expansion from origin (or could be from P)
scale_factor = 1 + lambda_linear # Use lambda_linear as expansion factor?
def func(z, scale=scale_factor):
return z * scale
phi_func = func
phi_name = "phi_expansion_origin"
# Add more keyword rules here...
else:
# Default: Identity transformation if no keywords match
def func(z):
return z
phi_func = func
phi_name = "phi_identity_default"
logging.info(f"Generated basic transformation '{phi_name}' from text prompt.")
# Pass relevant parameters (P, lambda_linear) if needed by the specific func
# This requires careful handling of kwargs in Transformation.apply
# For now, assume P, lambda are baked into the lambda/def if needed, or passed via apply later.
return Transformation(phi_func, name=phi_name)
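# Illustrative only: the "riflessione" rule above maps z to P - (z - P),
# i.e. z' = 2P - z, so P is always the midpoint between z and its image.
def _reflection_demo(z, P=complex(0.5, 0.5)):
    """Standalone re-statement of the reflection-through-P rule."""
    return P - (z - P)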
# ============================================================
# 6. ANALYSIS & VISUALIZATION TOOLS
# ============================================================
def visualize_results(results, show_trajectory=False, save_path=None):
"""
Visualizes the final state (all points) and optionally the time trajectory.
"""
all_points = results.get("all_points", set())
R_time_series = results.get("R_time_series", [])
params = results.get("parameters", None)
title_suffix = f" (λ={params.lambda_linear}, N={params.iterations})" if params else ""
if not all_points:
logging.warning("No points generated to visualize.")
return
plt.figure(figsize=(10, 10))
# Plot all generated points (the final "field")
x_vals = [z.real for z in all_points]
y_vals = [z.imag for z in all_points]
plt.scatter(x_vals, y_vals, s=1, alpha=0.5, label="All Points (Final Field)")
# Optionally plot the trajectory of the center of mass or specific points
if show_trajectory and R_time_series:
centroids_x = []
centroids_y = []
for R_t in R_time_series:
if R_t:
centroid = np.mean([(z.real, z.imag) for z in R_t], axis=0)
centroids_x.append(centroid[0])
centroids_y.append(centroid[1])
if centroids_x:
plt.plot(centroids_x, centroids_y, 'r-', lw=1, alpha=0.7, label="Centroid Trajectory")
plt.title(f"D-ND Simulation Result{title_suffix}")
plt.xlabel("Re(z) - Logical-Axiomatic X")
plt.ylabel("Im(z) - Logical-Axiomatic Y")
plt.grid(True)
plt.gca().set_aspect('equal', adjustable='box') # Ensure correct aspect ratio
plt.legend()
plt.tight_layout()
if save_path:
plt.savefig(save_path)
logging.info(f"Visualization saved to {save_path}")
else:
plt.show()
def analyze_density_over_time(results, save_path=None):
"""Plots the number of points in R(t) over time."""
simulation_log = results.get("simulation_log", [])
if not simulation_log:
logging.warning("No simulation log found for density analysis.")
return
times = [log['t'] for log in simulation_log]
densities = [log['|R|'] for log in simulation_log]
phases = [log['phase'] for log in simulation_log] # Get phase info
plt.figure(figsize=(12, 5))
plt.plot(times, densities, label="|R(t)|")
# Optional: Indicate phase changes on the plot
last_phase = simulation_log[0]['phase']
for log_entry in simulation_log[1:]:
if log_entry['phase'] != last_phase:
plt.axvline(x=log_entry['t'], color='grey', linestyle='--', lw=0.8)
plt.text(log_entry['t'] + 5, max(densities) * 0.9, log_entry['phase'], rotation=90, verticalalignment='top', fontsize=8)
last_phase = log_entry['phase']
plt.xlabel("Iteration (t)")
plt.ylabel("Number of points in R(t)")
plt.title("Density |R(t)| Evolution Over Time")
plt.grid(True)
# plt.legend() # Legend can become crowded with phase changes
plt.tight_layout()
if save_path:
plt.savefig(save_path)
logging.info(f"Density plot saved to {save_path}")
else:
plt.show()
def export_results_to_file(results, filename="dnd_simulation_results.txt"):
"""Exports key simulation results and time series to a text file."""
with open(filename, 'w') as f:
f.write("=== D-ND Simulation Results ===\n")
# Parameters
f.write("\n--- Parameters ---\n")
params = results.get("parameters")
if params:
for key, value in params.__dict__.items():
f.write(f"{key}: {value}\n")
else:
f.write("Parameters not found.\n")
# Transition Info
f.write("\n--- Transition Info ---\n")
transition_info = results.get("transition_info", {})
if transition_info.get('iteration') is not None:
f.write(f"Transition Iteration: {transition_info['iteration']}\n")
f.write(f"Transition Time (s): {transition_info['time']:.4f}\n")
else:
f.write("No transition recorded (or mode='time' beyond iterations).\n")
# Final State
f.write("\n--- Final State ---\n")
f.write(f"Total Iterations: {params.iterations if params else 'N/A'}\n")
f.write(f"Total Simulation Time (s): {results.get('total_time', float('nan')):.4f}\n")
f.write(f"Final |R| size: {len(results.get('final_R', set()))}\n")
f.write(f"Total unique points generated |all_points|: {len(results.get('all_points', set()))}\n")
# Time Series R(t) - Export only subset for brevity? Or full?
f.write("\n--- R(t) Time Series (Sampled) ---\n")
R_time_series = results.get("R_time_series", [])
sample_freq = max(1, len(R_time_series) // 100) # Sample ~100 points
for t, R in enumerate(R_time_series):
if t % sample_freq == 0:
points_str = ', '.join([f"({z.real:.4f}+{z.imag:.4f}j)" for z in R])
f.write(f"t={t}: |R|={len(R)} | Points: {points_str}\n") # Limit points string if too long?
logging.info(f"Exported simulation results to {filename}")
# ============================================================
# 7. EXAMPLE USAGE & EXPERIMENT EXECUTION
# ============================================================
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO) # Ensure logging is active
# --- Configuration ---
params = SystemParameters(
iterations=500, # Keep low for quick tests
lambda_linear=0.05,
P=complex(0.3, 0.7),
blend_iterations=20,
offset_A=complex(0.1, 0.6),
offset_B=complex(0.7, 0.2),
scale_factor_A=0.6,
scale_factor_B=0.4,
transition_mode='hausdorff', # Test Hausdorff transition
transition_threshold=0.01
)
# --- Experiment H1 Setup ---
# 1. Define concept lists
concepts_math = ["set", "point", "line", "fractal", "iteration", "limit"]
concepts_nature = ["tree", "leaf", "branch", "water", "flow", "spiral"]
concepts_art = ["color", "shape", "light", "shadow", "composition", "contrast"]
# 2. Generate initial states R0
R0_default = None # Will start from origin
R0_math = map_semantic_trajectory(concepts_math, method='circle')
R0_nature = map_semantic_trajectory(concepts_nature, method='spiral')
R0_art = map_semantic_trajectory(concepts_art, method='circle')
# --- Run Simulations ---
logging.info("\n--- Running Simulation: Default Start ---")
results_default = run_full_simulation(params, R0=R0_default)
visualize_results(results_default, save_path="dnd_default.png")
analyze_density_over_time(results_default, save_path="dnd_density_default.png")
export_results_to_file(results_default, filename="dnd_results_default.txt")
logging.info("\n--- Running Simulation: Math Concepts Start ---")
results_math = run_full_simulation(params, R0=R0_math)
visualize_results(results_math, save_path="dnd_math.png")
# analyze_density_over_time(results_math) # Optional extra plots
export_results_to_file(results_math, filename="dnd_results_math.txt")
logging.info("\n--- Running Simulation: Nature Concepts Start ---")
results_nature = run_full_simulation(params, R0=R0_nature)
visualize_results(results_nature, save_path="dnd_nature.png")
export_results_to_file(results_nature, filename="dnd_results_nature.txt")
# --- Basic Analysis for H1 ---
print("\n--- Hypothesis H1 Analysis (Basic) ---")
print(f"Default Start Transition Iteration: {results_default['transition_info'].get('iteration', 'N/A')}")
print(f"Math Concepts Transition Iteration: {results_math['transition_info'].get('iteration', 'N/A')}")
print(f"Nature Concepts Transition Iteration: {results_nature['transition_info'].get('iteration', 'N/A')}")
print("\nCompare the saved PNG images (dnd_default.png, dnd_math.png, dnd_nature.png) visually for differences in final patterns.")
print("Further analysis would require quantitative geometric measures.")
# --- Example with Generated Phi ---
logging.info("\n--- Running Simulation: Default Start + Generated Phi (Reflection) ---")
phi_reflection = generate_phi_from_text_basic("riflessione", P=params.P)
params_phi = SystemParameters(
iterations=600, # Allow more iterations for Phi phase
lambda_linear=0.05, P=complex(0.3, 0.7), blend_iterations=20,
offset_A=complex(0.1, 0.6), offset_B=complex(0.7, 0.2),
scale_factor_A=0.6, scale_factor_B=0.4,
transition_mode='time', # Use time transition before Phi
time_transition_iteration=100,
generated_phi=[phi_reflection] # Add the generated transformation
)
results_phi = run_full_simulation(params_phi, R0=None)
visualize_results(results_phi, save_path="dnd_phi_reflection.png")
export_results_to_file(results_phi, filename="dnd_results_phi.txt")
## 4. Conclusion and Future Developments
This version 4.1 of the D-ND framework provides a solid, flexible base for exploring the dynamics of abstract logical-cognitive systems. The consolidated architecture, the configurable transition logic, and the (albeit basic) integration of semantic elements increase its usefulness as a research tool.
**Possible Future Developments:**
1. **Advanced implementation of `generate_phi_from_text`:** Replace the keyword logic with a true NLP/LLM integration (e.g., a GPT API) to dynamically generate Python code for complex `Φ` functions from natural-language prompts. This would unlock enormous exploratory potential but requires care with security (`eval`/`exec`) and prompt engineering.
2. **Quantitative complexity/structure measures:** Integrate functions that compute quantitative metrics on the point sets `R` or `all_points` (e.g., fractal dimension via box counting, convex hull, clustering measures, spatial entropy) for a more rigorous analysis of results and for testing more specific hypotheses (such as H1(b)).
3. **Graphical User Interface (GUI):** Develop a GUI (e.g., with Tkinter, PyQt, Kivy, or web-based with Streamlit/Dash) to simplify parameter setup, simulation execution, and interactive visualization of results.
4. **Performance optimization:** Profile the code to identify bottlenecks (especially `check_stability` and the handling of very large `R` sets) and explore optimizations (e.g., Numba, Cython, more efficient geometric algorithms).
5. **Deeper theoretical exploration:** Use the framework to systematically investigate the implications of the underlying D-ND theory, mapping the model's parameters and phases more explicitly onto the theoretical concepts (Nothing-Everything, coherence, duality/non-duality).
6. **Alternative blend/IFS models:** Experiment with different blending modes between phases or with more complex IFS systems (e.g., non-linear, with memory).
---