
NTK+Curriculum Learning

hyxunn 2025. 9. 7. 13:30

<Basic setup>

 

model = PINN([2, 50, 50, 50, 1]).to(device)
step = 20
a1, a2 = 1, 1

LAMBDA_BC = 1.0
LAMBDA_PDE = 1.0
LAMBDA_IC = 5.0   # used when anchor (IC) points are included

# total loss scale (kept fixed), e.g. 60.0
SUM_W = 60.0
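The PINN class itself is not shown in this post; a minimal sketch consistent with the PINN([2, 50, 50, 50, 1]) call above (a plain fully connected tanh network) could look like the following. The activation choice and the device handling are assumptions, not the exact code used here.

import torch
import torch.nn as nn

class PINN(nn.Module):
    # Fully connected network: input (x, y) -> three hidden layers -> scalar u.
    def __init__(self, layers):
        super().__init__()
        mods = []
        for i in range(len(layers) - 1):
            mods.append(nn.Linear(layers[i], layers[i + 1]))
            if i < len(layers) - 2:      # no activation on the output layer
                mods.append(nn.Tanh())
        self.net = nn.Sequential(*mods)

    def forward(self, xy):
        return self.net(xy)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = PINN([2, 50, 50, 50, 1]).to(device)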

 

 

1. Four-stage schedule (k: 1 → 3 → 6 → 10)

K_SCHEDULE = [
    (0, 1.0),
    (5000, 3.0),
    (10000, 6.0),
    (15000, 10.0),
]
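The training loop that consumes this schedule is not shown; the reading assumed here is that at each epoch the active k is the last entry whose starting epoch has been reached, so k ramps 1 → 3 → 6 → 10. A small sketch of that lookup:

def current_k(epoch, schedule):
    # schedule: list of (start_epoch, k) pairs sorted by start_epoch.
    k = schedule[0][1]
    for start, value in schedule:
        if epoch >= start:
            k = value
    return k

K_SCHEDULE = [(0, 1.0), (5000, 3.0), (10000, 6.0), (15000, 10.0)]
assert current_k(7500, K_SCHEDULE) == 3.0
assert current_k(20000, K_SCHEDULE) == 10.0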

 

 

L2 abs: 1.566e-02 | L2 rel: 1.566e-02 | H1 abs: 1.632e-01 | H1 rel: 3.531e-02 | L_inf abs: 2.810e-02
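For reference, one common way to compute the reported metrics (the evaluation grid and reference solution used in this post are not shown, so treat this only as a sketch): discrete L2 and H1 norms of the error on a uniform grid with spacing dx, plus the max-norm.

import numpy as np

def error_metrics(u_pred, u_exact, grad_pred, grad_exact, dx):
    # Discrete L2, H1, and L_inf errors on a uniform 2D grid with spacing dx.
    diff = u_pred - u_exact
    l2_abs = np.sqrt(np.sum(diff**2) * dx**2)
    l2_rel = l2_abs / np.sqrt(np.sum(u_exact**2) * dx**2)
    gdiff = grad_pred - grad_exact                    # shape (..., 2): (u_x, u_y)
    h1_abs = np.sqrt(l2_abs**2 + np.sum(gdiff**2) * dx**2)
    h1_ref = np.sqrt(np.sum(u_exact**2) * dx**2 + np.sum(grad_exact**2) * dx**2)
    h1_rel = h1_abs / h1_ref
    linf_abs = np.max(np.abs(diff))
    return l2_abs, l2_rel, h1_abs, h1_rel, linf_abs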

 

 

2. Two-stage schedule (k: 1 → 10)

K_SCHEDULE = [
    (0, 1.0),
    (10000, 10.0),
]

 

L2 abs: 2.551e-02 | L2 rel: 2.551e-02 | H1 abs: 2.523e-01 | H1 rel: 5.458e-02 | L_inf abs: 3.075e-02

 

 


3. Training each k from 1 to 10 sequentially, with 10,000 epochs per stage

 

L2 abs: 9.518e-03 | L2 rel: 9.518e-03 | H1 abs: 9.406e-02 | H1 rel: 2.035e-02 | L_inf abs: 2.547e-02
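The log below follows a simple outer loop: for each fixed k, train 10,000 epochs, keep the best checkpoint as best_model_k{n}.pth, and warm-start the next stage from it. A hedged sketch of that loop, where loss_fn (the weighted PDE + BC loss) and the learning rate are assumptions:

import torch

def train_stage(model, loss_fn, k, epochs, ckpt_path, prev_ckpt=None):
    # Train at a fixed k, saving the best checkpoint; warm-start from prev_ckpt if given.
    if prev_ckpt is not None:
        model.load_state_dict(torch.load(prev_ckpt))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    best = float("inf")
    for epoch in range(epochs + 1):
        optimizer.zero_grad()
        loss = loss_fn(model, k)          # assumed helper: weighted PDE + BC loss at this k
        loss.backward()
        optimizer.step()
        if loss.item() < best:
            best = loss.item()
            torch.save(model.state_dict(), ckpt_path)
    return best

def run_curriculum(model, loss_fn, k_values=range(1, 11), epochs=10000):
    prev = None
    for n in k_values:
        path = f"best_model_k{n}.pth"
        best = train_stage(model, loss_fn, float(n), epochs, path, prev_ckpt=prev)
        print(f"[k={n}] stage best loss = {best:.3e}")
        prev = path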

 

Using device: cuda
============================================================
=== Train fixed k=1 for 10000 epochs | save to: best_model_k1.pth ===

[k=1.0] Epoch 0 | Loss 7.935e+01 | PDE 7.924e+01 | BC 1.068e-01 | λ_PDE=1.00, λ_BC=1.00

[k=1.0] Epoch 1000 | Loss 8.159e-01 | PDE 9.222e-03 | BC 2.916e-02 | λ_PDE=46.83, λ_BC=13.17

[k=1.0] Epoch 2000 | Loss 2.993e-01 | PDE 2.087e-03 | BC 2.049e-02 | λ_PDE=50.54, λ_BC=9.46

[k=1.0] Epoch 3000 | Loss 1.931e-01 | PDE 1.078e-03 | BC 1.722e-02 | λ_PDE=52.04, λ_BC=7.96

[k=1.0] Epoch 4000 | Loss 1.466e-01 | PDE 7.199e-04 | BC 1.507e-02 | λ_PDE=52.80, λ_BC=7.20

[k=1.0] Epoch 5000 | Loss 1.360e-01 | PDE 8.379e-04 | BC 1.346e-02 | λ_PDE=53.21, λ_BC=6.79

[k=1.0] Epoch 6000 | Loss 9.950e-02 | PDE 3.785e-04 | BC 1.206e-02 | λ_PDE=53.43, λ_BC=6.57

[k=1.0] Epoch 7000 | Loss 8.739e-02 | PDE 3.071e-04 | BC 1.100e-02 | λ_PDE=53.55, λ_BC=6.45

[k=1.0] Epoch 8000 | Loss 7.644e-02 | PDE 2.310e-04 | BC 1.008e-02 | λ_PDE=53.64, λ_BC=6.36

[k=1.0] Epoch 9000 | Loss 9.822e-02 | PDE 7.309e-04 | BC 9.358e-03 | λ_PDE=53.70, λ_BC=6.30

[k=1.0] Epoch 10000 | Loss 6.540e-02 | PDE 2.139e-04 | BC 8.600e-03 | λ_PDE=53.73, λ_BC=6.27

[k=1] stage best loss = 6.433e-02

============================================================
=== Train fixed k=2 for 10000 epochs | save to: best_model_k2.pth ===
Load previous checkpoint: best_model_k1.pth

[k=2.0] Epoch 0 | Loss 1.329e+00 | PDE 2.372e-02 | BC 8.619e-03 | λ_PDE=53.73, λ_BC=6.27

[k=2.0] Epoch 1000 | Loss 6.428e-02 | PDE 1.736e-04 | BC 1.083e-02 | λ_PDE=58.43, λ_BC=5.00

[k=2.0] Epoch 2000 | Loss 5.913e-02 | PDE 1.511e-04 | BC 1.008e-02 | λ_PDE=57.83, λ_BC=5.00

[k=2.0] Epoch 3000 | Loss 5.702e-02 | PDE 1.819e-04 | BC 9.326e-03 | λ_PDE=57.12, λ_BC=5.00

[k=2.0] Epoch 4000 | Loss 6.130e-02 | PDE 3.260e-04 | BC 8.584e-03 | λ_PDE=56.37, λ_BC=5.00

[k=2.0] Epoch 5000 | Loss 4.614e-02 | PDE 1.209e-04 | BC 7.883e-03 | λ_PDE=55.64, λ_BC=5.00

[k=2.0] Epoch 6000 | Loss 4.444e-02 | PDE 1.498e-04 | BC 7.241e-03 | λ_PDE=55.00, λ_BC=5.00

[k=2.0] Epoch 7000 | Loss 4.331e-02 | PDE 1.237e-04 | BC 6.630e-03 | λ_PDE=54.48, λ_BC=5.52

[k=2.0] Epoch 8000 | Loss 4.154e-02 | PDE 1.159e-04 | BC 5.990e-03 | λ_PDE=54.11, λ_BC=5.89

[k=2.0] Epoch 9000 | Loss 8.680e-02 | PDE 1.011e-03 | BC 5.261e-03 | λ_PDE=53.85, λ_BC=6.15

[k=2.0] Epoch 10000 | Loss 4.911e-02 | PDE 3.457e-04 | BC 4.831e-03 | λ_PDE=53.67, λ_BC=6.33

[k=2] stage best loss = 3.612e-02
============================================================
=== Train fixed k=3 for 10000 epochs | save to: best_model_k3.pth ===
Load previous checkpoint: best_model_k2.pth
[k=3.0] Epoch 0 | Loss 2.341e+00 | PDE 4.305e-02 | BC 4.798e-03 | λ_PDE=53.67, λ_BC=6.33

[k=3.0] Epoch 1000 | Loss 5.201e-02 | PDE 2.186e-04 | BC 7.847e-03 | λ_PDE=58.44, λ_BC=5.00

[k=3.0] Epoch 2000 | Loss 4.678e-02 | PDE 1.956e-04 | BC 7.091e-03 | λ_PDE=57.87, λ_BC=5.00

[k=3.0] Epoch 3000 | Loss 4.287e-02 | PDE 1.897e-04 | BC 6.403e-03 | λ_PDE=57.22, λ_BC=5.00

[k=3.0] Epoch 4000 | Loss 4.225e-02 | PDE 2.355e-04 | BC 5.787e-03 | λ_PDE=56.55, λ_BC=5.00

[k=3.0] Epoch 5000 | Loss 2.505e-01 | PDE 4.003e-03 | BC 5.330e-03 | λ_PDE=55.93, λ_BC=5.00

[k=3.0] Epoch 6000 | Loss 3.314e-02 | PDE 1.703e-04 | BC 4.740e-03 | λ_PDE=55.40, λ_BC=5.00

[k=3.0] Epoch 7000 | Loss 6.686e-02 | PDE 8.242e-04 | BC 4.295e-03 | λ_PDE=54.98, λ_BC=5.02

[k=3.0] Epoch 8000 | Loss 2.988e-02 | PDE 1.712e-04 | BC 3.851e-03 | λ_PDE=54.67, λ_BC=5.33

[k=3.0] Epoch 9000 | Loss 3.958e-02 | PDE 3.794e-04 | BC 3.421e-03 | λ_PDE=54.47, λ_BC=5.53

[k=3.0] Epoch 10000 | Loss 3.214e-02 | PDE 2.736e-04 | BC 3.047e-03 | λ_PDE=54.33, λ_BC=5.67

[k=3] stage best loss = 2.588e-02
============================================================
=== Train fixed k=4 for 10000 epochs | save to: best_model_k4.pth ===
Load previous checkpoint: best_model_k3.pth
[k=4.0] Epoch 0 | Loss 3.209e+00 | PDE 5.875e-02 | BC 3.045e-03 | λ_PDE=54.33, λ_BC=5.67

[k=4.0] Epoch 1000 | Loss 8.299e-02 | PDE 9.560e-04 | BC 5.409e-03 | λ_PDE=58.52, λ_BC=5.00

[k=4.0] Epoch 2000 | Loss 4.647e-02 | PDE 3.924e-04 | BC 4.737e-03 | λ_PDE=58.06, λ_BC=5.00

[k=4.0] Epoch 3000 | Loss 7.311e-02 | PDE 9.047e-04 | BC 4.201e-03 | λ_PDE=57.60, λ_BC=5.00

[k=4.0] Epoch 4000 | Loss 5.381e-02 | PDE 6.152e-04 | BC 3.729e-03 | λ_PDE=57.15, λ_BC=5.00

[k=4.0] Epoch 5000 | Loss 3.345e-02 | PDE 2.943e-04 | BC 3.349e-03 | λ_PDE=56.78, λ_BC=5.00

[k=4.0] Epoch 6000 | Loss 3.122e-02 | PDE 2.842e-04 | BC 3.034e-03 | λ_PDE=56.48, λ_BC=5.00

[k=4.0] Epoch 7000 | Loss 3.882e-02 | PDE 4.432e-04 | BC 2.778e-03 | λ_PDE=56.26, λ_BC=5.00

[k=4.0] Epoch 8000 | Loss 2.739e-02 | PDE 2.611e-04 | BC 2.547e-03 | λ_PDE=56.10, λ_BC=5.00

[k=4.0] Epoch 9000 | Loss 2.633e-02 | PDE 2.600e-04 | BC 2.354e-03 | λ_PDE=55.99, λ_BC=5.00

[k=4.0] Epoch 10000 | Loss 6.475e-02 | PDE 9.633e-04 | BC 2.176e-03 | λ_PDE=55.92, λ_BC=5.00

[k=4] stage best loss = 2.417e-02
============================================================
=== Train fixed k=5 for 10000 epochs | save to: best_model_k5.pth ===
Load previous checkpoint: best_model_k4.pth
[k=5.0] Epoch 0 | Loss 1.027e+01 | PDE 1.834e-01 | BC 2.195e-03 | λ_PDE=55.92, λ_BC=5.00

[k=5.0] Epoch 1000 | Loss 3.824e-02 | PDE 4.748e-04 | BC 2.080e-03 | λ_PDE=58.64, λ_BC=5.00

[k=5.0] Epoch 2000 | Loss 3.054e-02 | PDE 3.493e-04 | BC 2.032e-03 | λ_PDE=58.33, λ_BC=5.00

[k=5.0] Epoch 3000 | Loss 2.438e-01 | PDE 4.031e-03 | BC 1.971e-03 | λ_PDE=58.04, λ_BC=5.00

[k=5.0] Epoch 4000 | Loss 2.495e-02 | PDE 2.650e-04 | BC 1.927e-03 | λ_PDE=57.80, λ_BC=5.00

[k=5.0] Epoch 5000 | Loss 3.235e-02 | PDE 3.976e-04 | BC 1.887e-03 | λ_PDE=57.62, λ_BC=5.00

[k=5.0] Epoch 6000 | Loss 3.184e-02 | PDE 3.928e-04 | BC 1.853e-03 | λ_PDE=57.48, λ_BC=5.00

[k=5.0] Epoch 7000 | Loss 2.160e-02 | PDE 2.192e-04 | BC 1.804e-03 | λ_PDE=57.38, λ_BC=5.00

[k=5.0] Epoch 8000 | Loss 8.805e-02 | PDE 1.381e-03 | BC 1.773e-03 | λ_PDE=57.32, λ_BC=5.00

[k=5.0] Epoch 9000 | Loss 2.077e-02 | PDE 2.110e-04 | BC 1.738e-03 | λ_PDE=57.28, λ_BC=5.00

[k=5.0] Epoch 10000 | Loss 3.099e-02 | PDE 3.914e-04 | BC 1.716e-03 | λ_PDE=57.25, λ_BC=5.00

[k=5] stage best loss = 1.992e-02
============================================================
=== Train fixed k=6 for 10000 epochs | save to: best_model_k6.pth ===
Load previous checkpoint: best_model_k5.pth
[k=6.0] Epoch 0 | Loss 2.397e+01 | PDE 4.185e-01 | BC 1.718e-03 | λ_PDE=57.25, λ_BC=5.00

[k=6.0] Epoch 1000 | Loss 9.702e-02 | PDE 1.539e-03 | BC 1.317e-03 | λ_PDE=58.76, λ_BC=5.00

[k=6.0] Epoch 2000 | Loss 6.689e-02 | PDE 1.044e-03 | BC 1.145e-03 | λ_PDE=58.57, λ_BC=5.00

[k=6.0] Epoch 3000 | Loss 3.899e-01 | PDE 6.580e-03 | BC 1.099e-03 | λ_PDE=58.41, λ_BC=5.00

[k=6.0] Epoch 4000 | Loss 1.163e-01 | PDE 1.905e-03 | BC 1.048e-03 | λ_PDE=58.29, λ_BC=5.00

[k=6.0] Epoch 5000 | Loss 2.225e-01 | PDE 3.733e-03 | BC 1.051e-03 | λ_PDE=58.20, λ_BC=5.00

[k=6.0] Epoch 6000 | Loss 5.883e-01 | PDE 1.003e-02 | BC 1.010e-03 | λ_PDE=58.15, λ_BC=5.00

[k=6.0] Epoch 7000 | Loss 4.085e-02 | PDE 6.192e-04 | BC 9.739e-04 | λ_PDE=58.11, λ_BC=5.00

[k=6.0] Epoch 8000 | Loss 3.935e-02 | PDE 5.960e-04 | BC 9.458e-04 | λ_PDE=58.08, λ_BC=5.00

[k=6.0] Epoch 9000 | Loss 3.789e-02 | PDE 5.722e-04 | BC 9.337e-04 | λ_PDE=58.06, λ_BC=5.00

[k=6.0] Epoch 10000 | Loss 4.854e-02 | PDE 7.565e-04 | BC 9.241e-04 | λ_PDE=58.05, λ_BC=5.00

[k=6] stage best loss = 3.671e-02
============================================================
=== Train fixed k=7 for 10000 epochs | save to: best_model_k7.pth ===
Load previous checkpoint: best_model_k6.pth
[k=7.0] Epoch 0 | Loss 8.712e+00 | PDE 1.500e-01 | BC 9.290e-04 | λ_PDE=58.05, λ_BC=5.00

[k=7.0] Epoch 1000 | Loss 8.681e-02 | PDE 1.430e-03 | BC 5.202e-04 | λ_PDE=58.86, λ_BC=5.00

[k=7.0] Epoch 2000 | Loss 4.093e-01 | PDE 6.920e-03 | BC 5.120e-04 | λ_PDE=58.77, λ_BC=5.00

[k=7.0] Epoch 3000 | Loss 2.127e-01 | PDE 3.579e-03 | BC 5.167e-04 | λ_PDE=58.70, λ_BC=5.00

[k=7.0] Epoch 4000 | Loss 6.049e-02 | PDE 9.871e-04 | BC 5.181e-04 | λ_PDE=58.66, λ_BC=5.00

[k=7.0] Epoch 5000 | Loss 4.032e-01 | PDE 6.833e-03 | BC 5.271e-04 | λ_PDE=58.62, λ_BC=5.00

[k=7.0] Epoch 6000 | Loss 1.269e+00 | PDE 2.160e-02 | BC 5.525e-04 | λ_PDE=58.60, λ_BC=5.00

[k=7.0] Epoch 7000 | Loss 5.299e-02 | PDE 8.587e-04 | BC 5.353e-04 | λ_PDE=58.59, λ_BC=5.00

[k=7.0] Epoch 8000 | Loss 5.850e-02 | PDE 9.524e-04 | BC 5.412e-04 | λ_PDE=58.58, λ_BC=5.00

[k=7.0] Epoch 9000 | Loss 5.026e-02 | PDE 8.116e-04 | BC 5.449e-04 | λ_PDE=58.57, λ_BC=5.00

[k=7.0] Epoch 10000 | Loss 5.104e-01 | PDE 8.666e-03 | BC 5.536e-04 | λ_PDE=58.57, λ_BC=5.00

[k=7] stage best loss = 4.927e-02
============================================================
=== Train fixed k=8 for 10000 epochs | save to: best_model_k8.pth ===
Load previous checkpoint: best_model_k7.pth
[k=8.0] Epoch 0 | Loss 4.354e+00 | PDE 7.430e-02 | BC 5.497e-04 | λ_PDE=58.57, λ_BC=5.00

[k=8.0] Epoch 1000 | Loss 1.333e-01 | PDE 2.236e-03 | BC 2.981e-04 | λ_PDE=58.97, λ_BC=5.00

[k=8.0] Epoch 2000 | Loss 9.134e-02 | PDE 1.525e-03 | BC 2.858e-04 | λ_PDE=58.95, λ_BC=5.00

[k=8.0] Epoch 3000 | Loss 1.801e-01 | PDE 3.033e-03 | BC 2.855e-04 | λ_PDE=58.93, λ_BC=5.00

[k=8.0] Epoch 4000 | Loss 7.576e-02 | PDE 1.262e-03 | BC 2.789e-04 | λ_PDE=58.92, λ_BC=5.00

[k=8.0] Epoch 5000 | Loss 7.941e-02 | PDE 1.324e-03 | BC 2.766e-04 | λ_PDE=58.92, λ_BC=5.00

[k=8.0] Epoch 6000 | Loss 7.925e-02 | PDE 1.322e-03 | BC 2.738e-04 | λ_PDE=58.92, λ_BC=5.00

[k=8.0] Epoch 7000 | Loss 1.341e-01 | PDE 2.253e-03 | BC 2.681e-04 | λ_PDE=58.92, λ_BC=5.00

[k=8.0] Epoch 8000 | Loss 6.640e-02 | PDE 1.104e-03 | BC 2.671e-04 | λ_PDE=58.91, λ_BC=5.00

[k=8.0] Epoch 9000 | Loss 6.626e-02 | PDE 1.102e-03 | BC 2.642e-04 | λ_PDE=58.91, λ_BC=5.00

[k=8.0] Epoch 10000 | Loss 1.013e+00 | PDE 1.717e-02 | BC 2.707e-04 | λ_PDE=58.91, λ_BC=5.00

[k=8] stage best loss = 6.387e-02
============================================================
=== Train fixed k=9 for 10000 epochs | save to: best_model_k9.pth ===
Load previous checkpoint: best_model_k8.pth
[k=9.0] Epoch 0 | Loss 2.299e+00 | PDE 3.900e-02 | BC 2.625e-04 | λ_PDE=58.91, λ_BC=5.00

[k=9.0] Epoch 1000 | Loss 1.481e-01 | PDE 2.492e-03 | BC 1.826e-04 | λ_PDE=59.06, λ_BC=5.00

[k=9.0] Epoch 2000 | Loss 7.200e-01 | PDE 1.217e-02 | BC 1.771e-04 | λ_PDE=59.09, λ_BC=5.00

[k=9.0] Epoch 3000 | Loss 1.148e-01 | PDE 1.929e-03 | BC 1.639e-04 | λ_PDE=59.11, λ_BC=5.00

[k=9.0] Epoch 4000 | Loss 1.021e+00 | PDE 1.726e-02 | BC 1.661e-04 | λ_PDE=59.12, λ_BC=5.00

[k=9.0] Epoch 5000 | Loss 6.780e+00 | PDE 1.146e-01 | BC 1.838e-04 | λ_PDE=59.13, λ_BC=5.00

[k=9.0] Epoch 6000 | Loss 1.184e+01 | PDE 2.002e-01 | BC 1.682e-04 | λ_PDE=59.14, λ_BC=5.00

[k=9.0] Epoch 7000 | Loss 7.820e-02 | PDE 1.309e-03 | BC 1.517e-04 | λ_PDE=59.14, λ_BC=5.00

[k=9.0] Epoch 8000 | Loss 1.037e-01 | PDE 1.740e-03 | BC 1.503e-04 | λ_PDE=59.14, λ_BC=5.00

[k=9.0] Epoch 9000 | Loss 4.550e-01 | PDE 7.681e-03 | BC 1.457e-04 | λ_PDE=59.14, λ_BC=5.00

[k=9.0] Epoch 10000 | Loss 9.262e-01 | PDE 1.565e-02 | BC 1.483e-04 | λ_PDE=59.14, λ_BC=5.00

[k=9] stage best loss = 7.346e-02
============================================================
=== Train fixed k=10 for 10000 epochs | save to: best_model_k10.pth ===
Load previous checkpoint: best_model_k9.pth
[k=10.0] Epoch 0 | Loss 1.172e+00 | PDE 1.981e-02 | BC 1.457e-04 | λ_PDE=59.14, λ_BC=5.00
[k=10.0] Epoch 1000 | Loss 1.557e+01 | PDE 2.633e-01 | BC 9.305e-05 | λ_PDE=59.14, λ_BC=5.00

[k=10.0] Epoch 2000 | Loss 4.461e+00 | PDE 7.534e-02 | BC 8.731e-05 | λ_PDE=59.21, λ_BC=5.00

[k=10.0] Epoch 3000 | Loss 1.071e-01 | PDE 1.801e-03 | BC 6.639e-05 | λ_PDE=59.25, λ_BC=5.00

[k=10.0] Epoch 4000 | Loss 2.033e-01 | PDE 3.424e-03 | BC 6.518e-05 | λ_PDE=59.28, λ_BC=5.00

[k=10.0] Epoch 5000 | Loss 2.591e-01 | PDE 4.365e-03 | BC 6.274e-05 | λ_PDE=59.29, λ_BC=5.00

[k=10.0] Epoch 6000 | Loss 1.020e+01 | PDE 1.721e-01 | BC 9.535e-05 | λ_PDE=59.30, λ_BC=5.00

[k=10.0] Epoch 7000 | Loss 8.616e-02 | PDE 1.448e-03 | BC 6.115e-05 | λ_PDE=59.30, λ_BC=5.00

[k=10.0] Epoch 8000 | Loss 1.392e-01 | PDE 2.343e-03 | BC 6.080e-05 | λ_PDE=59.31, λ_BC=5.00

[k=10.0] Epoch 9000 | Loss 4.418e-01 | PDE 7.444e-03 | BC 5.988e-05 | λ_PDE=59.31, λ_BC=5.00

[k=10.0] Epoch 10000 | Loss 1.495e-01 | PDE 2.516e-03 | BC 5.903e-05 | λ_PDE=59.31, λ_BC=5.00

[k=10] stage best loss = 8.272e-02
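In the log above, λ_PDE + λ_BC equals 60 whenever both weights sit above 5.0, and λ_BC is otherwise pinned at 5.0. That is consistent with rescaling NTK-based weights to SUM_W and then clamping each at a floor of 5.0; a sketch of that step is below (an assumption about the update rule, since the post's weight-update code and the NTK trace computation are not shown, the traces are taken as given inputs here).

def normalize_weights(ntk_trace_pde, ntk_trace_bc, sum_w=60.0, floor=5.0):
    # NTK-style balancing: weight each term by (total trace) / (own trace),
    # rescale the raw weights so they sum to sum_w, then clamp each at a floor.
    total = ntk_trace_pde + ntk_trace_bc
    lam_pde = total / ntk_trace_pde
    lam_bc = total / ntk_trace_bc
    scale = sum_w / (lam_pde + lam_bc)
    lam_pde = max(lam_pde * scale, floor)
    lam_bc = max(lam_bc * scale, floor)
    return lam_pde, lam_bc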

 

[Table: per-k results for k = 1 … 10]
