Solving 1D Burgers' Equation - Finite Difference vs Neural Network

#3: Discourse on Deep Learning Techniques for Engineering Physics

https://adarshgouda.github.io/

Motivation

Scientific problems are typically tackled in one of three ways: analytically, numerically, or experimentally. Experiments are not always feasible because of cost and time constraints. Analytical techniques are the classic approach to problem-solving, but complex geometry, boundary conditions, and other factors often make a closed-form solution impossible.

Numerical approaches have therefore been favored for many years, since they can produce answers nearly as trustworthy as analytical ones in far less time and with far less effort. The well-known Navier-Stokes equations, for instance, have never been solved analytically in general, yet they can be solved readily with numerical schemes. Conventional numerical schemes have their own drawbacks, however: irregular boundaries are difficult to represent, many schemes are not designed for unstructured meshes, and momentum, energy, and mass are not exactly conserved in the discrete solution. Finally, they tend to be built around low-dimensional, edge-oriented discretizations and are ill-suited to large turbulent-flow problems.

Physics-informed neural networks (PINNs) are a new deep learning paradigm well suited to tackling both forward and inverse problems involving nonlinear partial differential equations (PDEs). Feedforward neural networks are trained as surrogate models that approximate the solution of a PDE without requiring labeled data, by embedding the physical information defined by the PDE into the training loss. Thanks to neural networks' remarkable capacity to represent complicated relationships, numerous PINN-based approaches have been developed for integer-order PDEs, fractional PDEs, stochastic PDEs, and integro-differential equations.

A Typical PINN Framework

In this article, I investigate the use of PINNs as a solution approximator for the 1D Burgers' equation. I first solve the equation using the Finite Difference Method and then solve the exact same equation using a PINN.

A Simple 1D Burgers' Equation:

You can read more about Burgers' equation on its Wikipedia page.

Burgers' equation in one spatial dimension looks like this:

$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}$$

As you can see, it is a combination of non-linear convection and diffusion. It is surprising how much you learn from this neat little equation!

$$x\in[0,2],\qquad t\in[0,0.48],\qquad \nu \ (\text{viscosity}) = 0.01/\pi$$

Initial Condition:

$$u(x,0)=\sin(\pi x)$$

Boundary Conditions:

$$u(0,t)=0,\qquad u(2,t)=0$$

Finite Difference Method

Geometrically, the second derivative is the slope of the curve traced by the first derivative. We will discretize the second-order derivative with a Central Difference scheme: a combination of the Forward Difference and Backward Difference of the first derivative. Consider the Taylor expansion of $u_{i+1}$ and $u_{i-1}$ around $u_i$:

$u_{i+1} = u_i + \Delta x \frac{\partial u}{\partial x}\bigg|_i + \frac{\Delta x^2}{2} \frac{\partial ^2 u}{\partial x^2}\bigg|_i + \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\bigg|_i + O(\Delta x^4)$

$u_{i-1} = u_i - \Delta x \frac{\partial u}{\partial x}\bigg|_i + \frac{\Delta x^2}{2} \frac{\partial ^2 u}{\partial x^2}\bigg|_i - \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\bigg|_i + O(\Delta x^4)$

If we add these two expansions, the odd-order derivative terms cancel each other out. Neglecting terms of $O(\Delta x^4)$ or higher (which are very small), we can rearrange the sum of these two expansions to solve for the second derivative.

$u_{i+1} + u_{i-1} = 2u_i+\Delta x^2 \frac{\partial ^2 u}{\partial x^2}\bigg|_i + O(\Delta x^4)$

Then rearrange to solve for $\frac{\partial ^2 u}{\partial x^2}\bigg|_i$ and the result is:

$$\frac{\partial ^2 u}{\partial x^2}=\frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2} + O(\Delta x^2)$$
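As a quick sanity check of the $O(\Delta x^2)$ accuracy claim, the short NumPy snippet below (not part of the solver; it uses $u=\sin x$ purely as an illustrative test function) applies this stencil and watches the maximum error drop by roughly a factor of four each time $\Delta x$ is halved:

import numpy as np

# Central-difference second derivative of sin(x), compared against the exact value -sin(x)
for n in (11, 21, 41, 81):
    x = np.linspace(0.0, 2.0*np.pi, n)
    dx = x[1] - x[0]
    u = np.sin(x)
    d2u = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2   # stencil applied at the interior points
    err = np.max(np.abs(d2u + np.sin(x[1:-1])))    # exact second derivative of sin is -sin
    print(f"n={n:3d}  dx={dx:.4f}  max error={err:.2e}")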

We can now write the discretized version of the Burgers equation in 1D:

$$ \frac{u^{n+1}_i - u^n_i}{\Delta t} + u_i^n \frac{u^{n}_i - u^n_{i-1}}{\Delta x} = \nu \frac{u^{n}_{i+1} -2u^n_i + u^n_{i-1}}{\Delta x^2} $$

As before, we notice that once we have an initial condition, the only unknown is $u_{i}^{n+1}$, so we re-arrange the equation solving for our unknown:

$$ u^{n+1}_i = u^n_i - u^n_i \frac{\Delta t}{\Delta x} (u^n_i - u^n_{i-1}) + \frac{\nu \Delta t}{\Delta x^2}(u^{n}_{i+1} - 2u^n_i + u^n_{i-1}) $$
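For readers who prefer array operations, here is a minimal vectorized sketch of one explicit time step of this update. It is equivalent to the per-node loop used in the solver cell further below, and assumes u is a 1D NumPy array holding $u^n$ while dt, dx, and viscosity are plain floats:

import numpy as np

def burgers_step(u, dt, dx, viscosity):
    # Advance u^n by one explicit step of the discretized Burgers' equation
    un = u.copy()                                     # freeze u^n before updating
    u = un.copy()
    u[1:-1] = (un[1:-1]
               - un[1:-1] * dt/dx * (un[1:-1] - un[:-2])                     # backward-difference convection
               + viscosity * dt/dx**2 * (un[2:] - 2.0*un[1:-1] + un[:-2]))   # central-difference diffusion
    return u                                          # endpoints are untouched (Dirichlet BCs)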
In [1]:
! pip install pyDOE --quiet 
from pyDOE import lhs
#We will use Latin Hypercube Sampling from this library
  Building wheel for pyDOE (setup.py) ... done
In [2]:
import torch                           # Pytorch
import torch.autograd as autograd      # computation graph
from torch import Tensor               
import torch.nn as nn                  # neural networks
import torch.optim as optim            # optimizers 
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter

import numpy as np
import pandas as pd
import time, sys
from tqdm.notebook import tqdm_notebook

#Set default dtype to float32
torch.set_default_dtype(torch.float)

#PyTorch random number generator
torch.manual_seed(1234)

# Random number generators in other libraries
np.random.seed(1234)

# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

print("The neural network will be trained on", device)

if device.type == 'cuda': 
    print(torch.cuda.get_device_name())
    
%matplotlib inline 
%config InlineBackend.print_figure_kwargs = {'bbox_inches':None}
The neural network will be trained on cuda
In [3]:
def plot3D(x,t,y):
    x_plot =x.squeeze(1) 
    t_plot =t.squeeze(1)
    X,T= torch.meshgrid(x_plot,t_plot,indexing='ij')
    u_xt = y
    fig = plt.figure()
    ax=fig.subplots(1,1)
    cp = ax.contourf(T, X, u_xt, 20, cmap=cm.rainbow)
    fig.colorbar(cp) # Add a colorbar to a plot
    ax.set_title('u')
    ax.set_xlabel('t')
    ax.set_ylabel('x')
    plt.show()
    ax = plt.axes(projection='3d')
    surf = ax.plot_surface(T.numpy(), X.numpy(), u_xt.numpy(),cmap=cm.rainbow, antialiased=False)
    ax.set_xlabel('t')
    ax.set_ylabel('x')
    ax.set_zlabel('u')
    #ax.set_zlim3d(-1, 1)
    plt.show()
In [4]:
#x∈[0,2]
x_min=0
x_max=2

#t∈[0,0.48]
t_min=0
t_max=0.48

viscosity = 0.01/np.pi # Given


#Discretization points for x and t

total_points_x=1001
total_points_t=1000

dx = (x_max-x_min)/(total_points_x-1)

dt = (t_max-t_min)/(total_points_t)


dx, dt
Out[4]:
(0.002, 0.00047999999999999996)
In [5]:
#Implementing the Finite Difference Method to solve the 1D Burgers' Equation

def u_fem(x,t):
    # Marches the global initial condition `u` forward in time, in place
    # (despite the "fem" name, this is the explicit finite-difference scheme derived above).
    un = torch.ones(total_points_x)
    rec = torch.zeros([total_points_x, total_points_t]) # stores u at every time step
    
    for j in tqdm_notebook(range(total_points_t)):
        
        un = u.clone() # snapshot of u^n
        
        for i in range(1,total_points_x-1):
            rec[i,j] = u[i]
            # backward difference for convection, central difference for diffusion
            u[i] = un[i] - un[i] * dt/dx * (un[i]-un[i-1])  + viscosity * (dt/dx**2) * (un[i+1]- 2*un[i] + un[i-1])
            if np.isnan(u[i]):
              print(i, j, u[i-1])
              break
    
    return u, rec
In [6]:
x = torch.linspace(x_min, x_max, total_points_x)
In [7]:
u = torch.from_numpy(np.sin(np.pi*x.numpy()))
In [8]:
u_plot = u.clone()
plt.figure(figsize=(11, 7), dpi= 100)
plt.plot(x, u_plot, lw=2)
plt.xlabel('x')
plt.ylabel('u')
plt.title('Burgers Equation at t=0');
In [9]:
# Reshaping x and t into column vectors for building the space-time grid

x = torch.linspace(x_min, x_max, total_points_x).view(-1,1)
t = torch.linspace(t_min, t_max, total_points_t).view(-1,1)
x.shape, t.shape
Out[9]:
(torch.Size([1001, 1]), torch.Size([1000, 1]))
In [10]:
# Computing Finite Difference solution for u

print("Running Finite Difference Method...")
u_final, u_fem_2D = u_fem(x,t)

assert u_fem_2D.shape == torch.Size([total_points_x, total_points_t]),f"Expected [{total_points_x},{total_points_t}], got {u_fem_2D.shape}"
print("Completed successfully!")
Running Finite Difference Method...
Completed successfully!

Finite Difference Method - Solution

In [11]:
X, T = torch.meshgrid(x.squeeze(1),t.squeeze(1), indexing='ij')
X.shape, T.shape
Out[11]:
(torch.Size([1001, 1000]), torch.Size([1001, 1000]))
In [12]:
plt.figure(figsize=(11, 7), dpi= 100)
plt.plot(x, u_plot, label="Initial Condition, t=0")
plt.plot(x, u_final, 'r',label="FDM, t=0.48")
plt.xlim([x_min,x_max])
plt.ylim([-1, 1])
plt.legend()
plt.xlabel('x')
plt.ylabel('u')
plt.title('Burgers Equation with Finite Difference Method');
plt.show()
In [13]:
print("FDM Solution Visualization")
plot3D(x,t,u_fem_2D)
FDM Solution Visualization

Notice that as time approaches 0.48, a shock wave is forming at the center of the domain. Due to limitations of the finite difference method, anything beyond t = 0.48 results in divergence. The shock wave is expected to fully form at t = 0.5.

Physics Informed Neural Network

If we rearrange our PDE, we get:

$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \nu \frac{\partial ^2u}{\partial x^2}=0$$

Neural Network

A Neural Network is a function approximator constructed as follows:

$$NN(X)=W_n\sigma_{n-1}(W_{n-1}\sigma_{n-2}(...(W_2(W_1X+b_1)+b_2)+..)+b_{n-1})+b_n$$

Note: We usually train our NN by iteratively minimizing a loss function ($MSE$: mean squared error) over the training dataset (known data).
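In PyTorch this composition is simply a stack of nn.Linear layers and activations. A minimal sketch is shown below; the layer widths here are illustrative and not the ones used for training later in this article:

import torch.nn as nn

# sigma = tanh; each nn.Linear applies W_i x + b_i
nn_sketch = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),    # sigma_1(W_1 X + b_1), the input being (x, t)
    nn.Linear(32, 32), nn.Tanh(),   # sigma_2(W_2 (...) + b_2)
    nn.Linear(32, 1),               # W_n (...) + b_n, no activation on the output
)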

PINNs = Neural Network + PDE

We can use a neural network to approximate any function (Universal Approximation Theorem): $$NN(x,t)\approx u(x,t)$$

Since the NN is a differentiable function, we can obtain its derivatives, e.g. $\frac{\partial NN}{\partial t}$ and $\frac{\partial^2 NN}{\partial x^2}$, via automatic differentiation.
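As a standalone illustration (a toy network, not the one trained below), the derivatives of a network's output with respect to its inputs can be obtained with torch.autograd.grad:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))   # toy NN(x, t)
xt = torch.rand(8, 2, requires_grad=True)                            # 8 sample points (x, t)
u = net(xt)                                                          # shape [8, 1]

# dNN/dx and dNN/dt at every sample point
du = torch.autograd.grad(u, xt, grad_outputs=torch.ones_like(u), create_graph=True)[0]
u_x, u_t = du[:, [0]], du[:, [1]]

# d2NN/dx2: differentiate u_x once more with respect to the inputs
d2u = torch.autograd.grad(u_x, xt, grad_outputs=torch.ones_like(u_x), create_graph=True)[0]
u_xx = d2u[:, [0]]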

Assume:$$NN(t,x)\approx u(t,x)$$

Then:

$$\frac{\partial NN}{\partial t}+NN\frac{\partial NN}{\partial x}-\nu\frac{\partial^2 NN}{\partial x^2}\approx \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} - \nu \frac{\partial ^2u}{\partial x^2}=0$$

And:

$$\frac{\partial NN}{\partial t}+NN\frac{\partial NN}{\partial x}-\nu\frac{\partial^2 NN}{\partial x^2}\approx 0$$

We define this function as $f$:

$$f(t,x)=\frac{\partial NN}{\partial t}+NN\frac{\partial NN}{\partial x}-\nu\frac{\partial^2 NN}{\partial x^2}\ \rightarrow\ \text{residue}$$

If the residue $\rightarrow 0$, then our NN respects the physical law. To drive the residue toward zero, we minimize its L2 norm against a matrix of zeros of the same size as the residue.

PINNs' Loss function

We evaluate our PDE in a certain number of "collocation points" ($N_f$) inside our domain $(x,t)$. Then we iteratively minimize a loss function related to $f$:

$$MSE_f=\frac{1}{N_f}\sum^{N_f}_{i=1}|f(t_f^i,x_f^i)|^2$$

Usually, the training dataset is a set of points at which we already know the answer. In our case, we will use our boundary conditions (BC) and initial condition (IC).

Initial Condition:

$$u(x,0)=\sin(\pi x)$$

Boundary Conditions:

$$u(0,t)=0,\qquad u(2,t)=0$$

Since we know the outcome at these points, we select $N_u$ points from our BC and IC and use them to train our network.

$$MSE_{u}=\frac{1}{N_u}\sum^{N_u}_{i=1}|u(t_{u}^i,x_u^i)-NN(t_{u}^i,x_u^i)|^2$$

Total Loss:

$$MSE=MSE_{u}+MSE_f$$
In [14]:
#PINNs
In [15]:
# Creating the same space-time grid as used for the FDM solution
x = torch.linspace(x_min, x_max, total_points_x).view(-1,1)
t = torch.linspace(t_min, t_max, total_points_t).view(-1,1)
x.shape, t.shape
Out[15]:
(torch.Size([1001, 1]), torch.Size([1000, 1]))
In [16]:
X, T = torch.meshgrid(x.squeeze(1),t.squeeze(1), indexing='ij') #same as FDM
X.shape, T.shape
Out[16]:
(torch.Size([1001, 1000]), torch.Size([1001, 1000]))
In [17]:
left_X = torch.hstack((X[:,0][:,None], T[:,0][:,None])) #initial-condition points: every x at t = 0
In [18]:
left_U = torch.sin(np.pi*left_X[:,0]).unsqueeze(1) #initial condition is a sine wave
left_U.shape
Out[18]:
torch.Size([1001, 1])
In [19]:
plt.figure(figsize=(11, 7), dpi= 100)
plt.plot(x, left_U, lw=2)
plt.xlabel('x')
plt.ylabel('u')
plt.title('Burgers Equation at t=0');
In [20]:
# BC points at x = x_min (bottom) and x = x_max (top)
bottom_X = torch.hstack((X[0,:][:,None],T[0,:][:,None]))
top_X = torch.hstack((X[-1,:][:,None],T[-1,:][:,None]))

bottom_U = torch.zeros(bottom_X.shape[0],1)
top_U = torch.zeros(top_X.shape[0],1)

bottom_X.shape
Out[20]:
torch.Size([1000, 2])
In [21]:
X_bc = torch.vstack([bottom_X, top_X])
U_bc = torch.vstack([bottom_U, top_U])

X_bc.shape
Out[21]:
torch.Size([2000, 2])
In [22]:
N_ic = 1000
N_bc = 1000 #Number of points on IC and BC
N_pde = 30000 #Number of points on PDE domain (Collocation Points)

#Now we will sample N_bc points at random 
#from the X_bc, U_bc dataset (and N_ic points from left_X, left_U)

idx = np.random.choice(X_bc.shape[0],N_bc, replace=False)
X_bc_samples = X_bc[idx,:]
U_bc_samples = U_bc[idx,:]

idx = np.random.choice(left_X.shape[0],N_ic, replace=False)
X_ic_samples = left_X[idx,:]
U_ic_samples = left_U[idx,:]

#The boundary conditions will not change. 
#Hence, these U values can be used as supervised labels during training

#For PDE collocation points, we will generate new X_train_pde dataset
#We do not know U(X,T) for these points 

#Let's get the entire X,T dataset in a format suitable for the Neural Network.
#We will later use this for testing the NN as well, so let's call it x_test for convenience.

x_test = torch.hstack((X.transpose(1,0).flatten()[:,None],
                       T.transpose(1,0).flatten()[:,None]))

#We need column-major flattening to simulate time-marching, hence the transpose(1,0) (or simply .T)

#We will use u generated from the FDM as our u_test.
#u_test is used later for measuring NN performance.

u_test = u_fem_2D.transpose(1,0).flatten()[:,None]
x_test.shape
Out[22]:
torch.Size([1001000, 2])
In [23]:
x_test.shape
Out[23]:
torch.Size([1001000, 2])
In [24]:
#lower and upper bounds of x_test
lb = x_test[0]
ub = x_test[-1]
lb,ub
Out[24]:
(tensor([0., 0.]), tensor([2.0000, 0.4800]))
In [25]:
#Sampling (X,T) domain using LHS
lhs_samples = lhs(2,N_pde) 
#2 because there are 2 input variables, [x, t]
lhs_samples.shape
Out[25]:
(30000, 2)
In [26]:
X_train_lhs = lb + (ub-lb)*lhs_samples
X_train_lhs.shape
Out[26]:
torch.Size([30000, 2])
In [27]:
plt.plot(pd.DataFrame(X_train_lhs)[0], "bo", markersize=.5)
plt.title("LHS Sampling of X [0,2]")
plt.show()
In [28]:
plt.plot(pd.DataFrame(X_train_lhs)[1], "ro", markersize=.5)
plt.title("LHS Sampling of T [0,0.48]")
plt.show()
In [29]:
X_train_final = torch.vstack((X_train_lhs, X_ic_samples, X_bc_samples))
X_train_final.shape
Out[29]:
torch.Size([32000, 2])
In [30]:
#Lets define a u_NN

class u_NN(nn.Module):
    
    def __init__(self, layers_list):
        
        super().__init__()

        self.depth = len(layers_list)

        self.loss_function = nn.MSELoss(reduction="mean")
    
        self.activation = nn.Tanh() #This is important; ReLU won't work here because its second derivative is zero

        self.linears = nn.ModuleList([nn.Linear(layers_list[i],layers_list[i+1]) for i in range(self.depth-1)])

        for i in range(self.depth-1):
      
          nn.init.xavier_normal_(self.linears[i].weight.data, gain=1.0) #xavier normalization of weights
      
          nn.init.zeros_(self.linears[i].bias.data) #all biases set to zero 

    def Convert(self, x): #helper function
        
        if torch.is_tensor(x) !=True:
            x = torch.from_numpy(x)
        return x.float().to(device)
  
    def forward(self, x):
        
        a = self.Convert(x)

        for i in range(self.depth-2):
            z = self.linears[i](a)
            a = self.activation(z)
        
        a = self.linears[-1](a)

        return a

    def loss_bc(self, x_bc, u_bc):
        #This is similar to supervised learning

        l_bc = self.loss_function(self.forward(self.Convert(x_bc)), self.Convert(u_bc)) #L2 loss

        return l_bc

    def loss_ic(self, x_ic, u_ic):
        #This is similar to supervised learning

        l_ic = self.loss_function(self.forward(self.Convert(x_ic)), self.Convert(u_ic)) #L2 loss

        return l_ic

    def loss_pde(self, x_pde):
        # We will pass X_train_final here. 
        # Note that we do not have labels (U_pde) to compute a loss against; this is not supervised learning. 
        # Instead, we want the PDE residue to be as close to zero as possible, so we first compute the residue and then minimize it.

        x_pde = self.Convert(x_pde)

        x_pde_clone = x_pde.clone() ##VERY IMPORTANT
    
        x_pde_clone.requires_grad = True #enable Auto Differentiation

        NN = self.forward(x_pde_clone) #Generates predictions from u_NN

        NNx_NNt = torch.autograd.grad(NN, x_pde_clone,self.Convert(torch.ones([x_pde_clone.shape[0],1])),retain_graph=True, create_graph=True)[0] #gradients of NN w.r.t. the inputs (x, t)

        NNxx_NNtt = torch.autograd.grad(NNx_NNt,x_pde_clone, self.Convert(torch.ones(x_pde_clone.shape)), create_graph=True)[0] #second-order gradients w.r.t. the inputs (x, t)

        NNxx = NNxx_NNtt[:,[0]] #take the x-column as the d2/dx2 term

        NNt = NNx_NNt[:,[1]] #d/dt term

        NNx = NNx_NNt[:,[0]] #d/dx term
    
        # The PDE is du/dt + u*du/dx - viscosity*d2u/dx2 = 0, so the NN residue is
        # {dNN/dt + NN*dNN/dx - viscosity*d2NN/dx2} == {NNt + NN*NNx - viscosity*NNxx}

        residue = NNt + NN*NNx - (viscosity)*NNxx 

        # The residues need to be zero (or as low as possible). We'll create an array of zeros and minimize the residue against it.

        zeros = self.Convert(torch.zeros(residue.shape[0],1))

        l_pde = self.loss_function(residue, zeros) #L2 Loss

        return l_pde
    
    def total_loss(self, x_ic, u_ic, x_bc, u_bc, x_pde): #Combine both loss
        l_bc = self.loss_bc(x_bc, u_bc)
        l_ic = self.loss_ic(x_ic, u_ic)
        l_pde = self.loss_pde(x_pde)
        return l_bc + l_pde + l_ic #this HAS to be a scalar value for auto differentiation to do its thing.
In [31]:
#Parameters for u_NN

EPOCHS = 100000
initial_lr = 0.001
layers_list = [2, 32, 128, 16, 128, 32, 1]
#batch_size = 32

# Instantiate a model

PINN = u_NN(layers_list).to(device)
print(PINN)

optimizer = torch.optim.Adam(PINN.parameters(), lr=initial_lr,amsgrad=False)

scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.985)

history = pd.DataFrame(columns=["Epochs","Learning_Rate", "IC_Loss","BC_Loss","PDE_Loss","Total_Loss","Test_Loss"])
u_NN(
  (loss_function): MSELoss()
  (activation): Tanh()
  (linears): ModuleList(
    (0): Linear(in_features=2, out_features=32, bias=True)
    (1): Linear(in_features=32, out_features=128, bias=True)
    (2): Linear(in_features=128, out_features=16, bias=True)
    (3): Linear(in_features=16, out_features=128, bias=True)
    (4): Linear(in_features=128, out_features=32, bias=True)
    (5): Linear(in_features=32, out_features=1, bias=True)
  )
)
In [32]:
#****** Training ******#

print("Training Physics Informed Neural Network...")

Epoch = []
Learning_Rate = []
IC_Loss = []
BC_Loss = []
PDE_Loss = []
Total_Loss = []
Test_Loss = []

for i in tqdm_notebook(range(EPOCHS)):
    if i==0:
        print("Epoch \t Learning_Rate \t IC_Loss \t BC_Loss \t PDE_Loss \t Total_Loss \t Test_Loss")
        
    l_ic = PINN.loss_ic(X_ic_samples,U_ic_samples)
    l_bc = PINN.loss_bc(X_bc_samples,U_bc_samples)
    l_pde = PINN.loss_pde(X_train_final)
    loss = PINN.total_loss(X_ic_samples,U_ic_samples,X_bc_samples,U_bc_samples, X_train_final)
    
    optimizer.zero_grad()
    
    loss.backward()
    
    optimizer.step()
    
    if i%100 == 0: #print losses and step the exponential learning rate.
                
        with torch.no_grad():
            test_loss = PINN.loss_bc(x_test,u_test) #Here we are using loss_bc method as a helper function to calculate L2 loss
            
            Epoch.append(i)
            Learning_Rate.append(scheduler.get_last_lr()[0])
            IC_Loss.append(l_ic.detach().cpu().numpy())
            BC_Loss.append(l_bc.detach().cpu().numpy())
            PDE_Loss.append(l_pde.detach().cpu().numpy())
            Total_Loss.append(loss.detach().cpu().numpy())
            Test_Loss.append(test_loss.detach().cpu().numpy())

            if i%1000 ==0:
               print(i,'\t',format(scheduler.get_last_lr()[0],".4E"),'\t',format(l_ic.detach().cpu().numpy(),".4E"),'\t',format(l_bc.detach().cpu().numpy(),".4E"),'\t',
                  format(l_pde.detach().cpu().numpy(),".4E"),'\t',format(loss.detach().cpu().numpy(),".4E"),'\t',format(test_loss.detach().cpu().numpy(),".4E"))
 
        scheduler.step()

print("Completed!!")
Training Physics Informed Neural Network...
Epoch 	 Learning_Rate 	 IC_Loss 	 BC_Loss 	 PDE_Loss 	 Total_Loss 	 Test_Loss
0 	 1.0000E-03 	 5.7712E-01 	 2.9688E-02 	 3.4839E-02 	 6.4164E-01 	 4.7898E-01
1000 	 8.5973E-04 	 1.5992E-02 	 3.1669E-04 	 7.0765E-03 	 2.3385E-02 	 2.7861E-02
2000 	 7.3914E-04 	 9.6775E-03 	 2.9415E-04 	 3.7279E-03 	 1.3700E-02 	 1.8410E-02
3000 	 6.3546E-04 	 7.2524E-03 	 6.9606E-05 	 2.3851E-03 	 9.7071E-03 	 1.4186E-02
4000 	 5.4632E-04 	 4.9654E-03 	 3.5230E-05 	 2.1570E-03 	 7.1576E-03 	 1.0299E-02
5000 	 4.6969E-04 	 2.1240E-03 	 2.7083E-05 	 1.2403E-03 	 3.3914E-03 	 4.3880E-03
6000 	 4.0381E-04 	 7.3463E-04 	 1.7302E-06 	 1.0293E-03 	 1.7657E-03 	 1.4502E-03
7000 	 3.4717E-04 	 4.0344E-04 	 5.6225E-06 	 6.4979E-04 	 1.0588E-03 	 1.0144E-03
8000 	 2.9847E-04 	 3.0374E-04 	 1.6648E-06 	 4.8820E-04 	 7.9361E-04 	 8.7515E-04
9000 	 2.5660E-04 	 2.3008E-04 	 2.2152E-06 	 3.0860E-04 	 5.4090E-04 	 8.0031E-04
10000 	 2.2061E-04 	 1.8589E-04 	 3.4704E-06 	 2.5842E-04 	 4.4778E-04 	 7.6148E-04
11000 	 1.8966E-04 	 1.5873E-04 	 3.0393E-06 	 2.2766E-04 	 3.8943E-04 	 7.4176E-04
12000 	 1.6306E-04 	 1.3914E-04 	 2.5755E-06 	 2.0390E-04 	 3.4562E-04 	 7.3122E-04
13000 	 1.4019E-04 	 1.2549E-04 	 1.2634E-06 	 2.0048E-04 	 3.2723E-04 	 7.2850E-04
14000 	 1.2052E-04 	 1.1193E-04 	 2.2280E-06 	 1.6992E-04 	 2.8408E-04 	 7.2572E-04
15000 	 1.0362E-04 	 1.0237E-04 	 2.0385E-06 	 1.5752E-04 	 2.6193E-04 	 7.2827E-04
16000 	 8.9083E-05 	 9.4443E-05 	 1.9677E-06 	 1.4694E-04 	 2.4335E-04 	 7.3330E-04
17000 	 7.6588E-05 	 8.8070E-05 	 1.7882E-06 	 1.3797E-04 	 2.2783E-04 	 7.3979E-04
18000 	 6.5845E-05 	 8.2547E-05 	 1.8063E-06 	 1.3044E-04 	 2.1479E-04 	 7.4736E-04
19000 	 5.6609E-05 	 7.8319E-05 	 1.4159E-06 	 1.2460E-04 	 2.0434E-04 	 7.5431E-04
20000 	 4.8668E-05 	 7.3571E-05 	 1.8557E-06 	 1.1765E-04 	 1.9308E-04 	 7.6066E-04
21000 	 4.1842E-05 	 6.9800E-05 	 2.1831E-06 	 1.1789E-04 	 1.8988E-04 	 7.6832E-04
22000 	 3.5973E-05 	 6.6776E-05 	 1.3036E-06 	 1.0698E-04 	 1.7506E-04 	 7.7291E-04
23000 	 3.0927E-05 	 6.3573E-05 	 1.3804E-06 	 1.0231E-04 	 1.6726E-04 	 7.7840E-04
24000 	 2.6589E-05 	 6.0313E-05 	 2.0253E-06 	 1.0101E-04 	 1.6334E-04 	 7.8589E-04
25000 	 2.2859E-05 	 5.8075E-05 	 1.3085E-06 	 9.4235E-05 	 1.5362E-04 	 7.8828E-04
26000 	 1.9653E-05 	 5.5643E-05 	 1.2594E-06 	 9.0575E-05 	 1.4748E-04 	 7.9283E-04
27000 	 1.6896E-05 	 5.3377E-05 	 1.2187E-06 	 8.7201E-05 	 1.4180E-04 	 7.9710E-04
28000 	 1.4526E-05 	 5.1265E-05 	 1.2447E-06 	 8.4185E-05 	 1.3670E-04 	 8.0111E-04
29000 	 1.2488E-05 	 4.9330E-05 	 1.1704E-06 	 8.1166E-05 	 1.3167E-04 	 8.0480E-04
30000 	 1.0737E-05 	 4.7492E-05 	 1.1787E-06 	 7.8490E-05 	 1.2716E-04 	 8.0815E-04
31000 	 9.2306E-06 	 4.5855E-05 	 1.0515E-06 	 7.6056E-05 	 1.2296E-04 	 8.1130E-04
32000 	 7.9359E-06 	 4.4284E-05 	 1.0289E-06 	 7.3652E-05 	 1.1897E-04 	 8.1422E-04
33000 	 6.8227E-06 	 4.2774E-05 	 1.1971E-06 	 7.2671E-05 	 1.1664E-04 	 8.1735E-04
34000 	 5.8657E-06 	 4.1399E-05 	 1.0336E-06 	 6.9211E-05 	 1.1164E-04 	 8.1947E-04
35000 	 5.0429E-06 	 4.0106E-05 	 1.0172E-06 	 6.7251E-05 	 1.0837E-04 	 8.2184E-04
36000 	 4.3355E-06 	 3.8909E-05 	 9.7423E-07 	 6.5457E-05 	 1.0534E-04 	 8.2413E-04
37000 	 3.7274E-06 	 3.7776E-05 	 9.8380E-07 	 6.3678E-05 	 1.0244E-04 	 8.2614E-04
38000 	 3.2046E-06 	 3.6726E-05 	 9.6457E-07 	 6.2048E-05 	 9.9739E-05 	 8.2812E-04
39000 	 2.7551E-06 	 3.5723E-05 	 9.6818E-07 	 6.0503E-05 	 9.7194E-05 	 8.3000E-04
40000 	 2.3686E-06 	 3.4835E-05 	 8.9628E-07 	 5.9144E-05 	 9.4875E-05 	 8.3181E-04
41000 	 2.0364E-06 	 3.3931E-05 	 9.3546E-07 	 5.7702E-05 	 9.2569E-05 	 8.3347E-04
42000 	 1.7507E-06 	 3.3116E-05 	 9.2101E-07 	 5.6431E-05 	 9.0468E-05 	 8.3507E-04
43000 	 1.5051E-06 	 3.2354E-05 	 9.1510E-07 	 5.5236E-05 	 8.8505E-05 	 8.3664E-04
44000 	 1.2940E-06 	 3.1635E-05 	 9.1371E-07 	 5.4118E-05 	 8.6667E-05 	 8.3814E-04
45000 	 1.1125E-06 	 3.0968E-05 	 9.0248E-07 	 5.3067E-05 	 8.4937E-05 	 8.3964E-04
46000 	 9.5646E-07 	 3.0349E-05 	 8.9360E-07 	 5.2080E-05 	 8.3323E-05 	 8.4108E-04
47000 	 8.2230E-07 	 2.9759E-05 	 9.0175E-07 	 5.1157E-05 	 8.1818E-05 	 8.4247E-04
48000 	 7.0695E-07 	 2.9226E-05 	 8.8676E-07 	 5.0294E-05 	 8.0407E-05 	 8.4383E-04
49000 	 6.0779E-07 	 2.8727E-05 	 8.7881E-07 	 4.9492E-05 	 7.9098E-05 	 8.4517E-04
50000 	 5.2253E-07 	 2.8249E-05 	 8.8631E-07 	 4.8735E-05 	 7.7871E-05 	 8.4645E-04
51000 	 4.4924E-07 	 2.7815E-05 	 8.8472E-07 	 4.8034E-05 	 7.6735E-05 	 8.4772E-04
52000 	 3.8622E-07 	 2.7414E-05 	 8.8248E-07 	 4.7381E-05 	 7.5678E-05 	 8.4891E-04
53000 	 3.3205E-07 	 2.7042E-05 	 8.8226E-07 	 4.6775E-05 	 7.4700E-05 	 8.5009E-04
54000 	 2.8547E-07 	 2.6702E-05 	 8.8053E-07 	 4.6219E-05 	 7.3801E-05 	 8.5119E-04
55000 	 2.4543E-07 	 2.6390E-05 	 8.8195E-07 	 4.5700E-05 	 7.2972E-05 	 8.5225E-04
56000 	 2.1100E-07 	 2.6106E-05 	 8.8036E-07 	 4.5226E-05 	 7.2213E-05 	 8.5324E-04
57000 	 1.8141E-07 	 2.5846E-05 	 8.7891E-07 	 4.4793E-05 	 7.1518E-05 	 8.5420E-04
58000 	 1.5596E-07 	 2.5611E-05 	 8.7962E-07 	 4.4395E-05 	 7.0886E-05 	 8.5505E-04
59000 	 1.3408E-07 	 2.5398E-05 	 8.7941E-07 	 4.4038E-05 	 7.0315E-05 	 8.5586E-04
60000 	 1.1528E-07 	 2.5205E-05 	 8.8078E-07 	 4.3708E-05 	 6.9794E-05 	 8.5658E-04
61000 	 9.9106E-08 	 2.5033E-05 	 8.8024E-07 	 4.3416E-05 	 6.9328E-05 	 8.5722E-04
62000 	 8.5205E-08 	 2.4875E-05 	 8.8014E-07 	 4.3152E-05 	 6.8907E-05 	 8.5786E-04
63000 	 7.3253E-08 	 2.4735E-05 	 8.8045E-07 	 4.2918E-05 	 6.8533E-05 	 8.5840E-04
64000 	 6.2978E-08 	 2.4610E-05 	 8.8088E-07 	 4.2710E-05 	 6.8201E-05 	 8.5890E-04
65000 	 5.4144E-08 	 2.4500E-05 	 8.8101E-07 	 4.2519E-05 	 6.7900E-05 	 8.5933E-04
66000 	 4.6549E-08 	 2.4403E-05 	 8.8103E-07 	 4.2352E-05 	 6.7636E-05 	 8.5971E-04
67000 	 4.0020E-08 	 2.4320E-05 	 8.8113E-07 	 4.2207E-05 	 6.7408E-05 	 8.6004E-04
68000 	 3.4406E-08 	 2.4246E-05 	 8.8170E-07 	 4.2082E-05 	 6.7209E-05 	 8.6036E-04
69000 	 2.9580E-08 	 2.4180E-05 	 8.8157E-07 	 4.1970E-05 	 6.7032E-05 	 8.6064E-04
70000 	 2.5431E-08 	 2.4121E-05 	 8.8172E-07 	 4.1870E-05 	 6.6873E-05 	 8.6086E-04
71000 	 2.1864E-08 	 2.4070E-05 	 8.8172E-07 	 4.1782E-05 	 6.6734E-05 	 8.6102E-04
72000 	 1.8797E-08 	 2.4026E-05 	 8.8181E-07 	 4.1708E-05 	 6.6616E-05 	 8.6116E-04
73000 	 1.6160E-08 	 2.3988E-05 	 8.8166E-07 	 4.1644E-05 	 6.6514E-05 	 8.6125E-04
74000 	 1.3893E-08 	 2.3958E-05 	 8.8206E-07 	 4.1589E-05 	 6.6429E-05 	 8.6138E-04
75000 	 1.1945E-08 	 2.3933E-05 	 8.8238E-07 	 4.1541E-05 	 6.6356E-05 	 8.6148E-04
76000 	 1.0269E-08 	 2.3912E-05 	 8.8260E-07 	 4.1501E-05 	 6.6296E-05 	 8.6154E-04
77000 	 8.8287E-09 	 2.3893E-05 	 8.8287E-07 	 4.1465E-05 	 6.6241E-05 	 8.6160E-04
78000 	 7.5903E-09 	 2.3878E-05 	 8.8293E-07 	 4.1432E-05 	 6.6193E-05 	 8.6166E-04
79000 	 6.5256E-09 	 2.3869E-05 	 8.8336E-07 	 4.1415E-05 	 6.6168E-05 	 8.6169E-04
80000 	 5.6103E-09 	 2.3861E-05 	 8.8358E-07 	 4.1399E-05 	 6.6144E-05 	 8.6173E-04
81000 	 4.8233E-09 	 2.3854E-05 	 8.8376E-07 	 4.1387E-05 	 6.6125E-05 	 8.6177E-04
82000 	 4.1468E-09 	 2.3847E-05 	 8.8379E-07 	 4.1373E-05 	 6.6104E-05 	 8.6181E-04
83000 	 3.5651E-09 	 2.3843E-05 	 8.8376E-07 	 4.1365E-05 	 6.6092E-05 	 8.6181E-04
84000 	 3.0650E-09 	 2.3842E-05 	 8.8378E-07 	 4.1359E-05 	 6.6085E-05 	 8.6176E-04
85000 	 2.6351E-09 	 2.3840E-05 	 8.8367E-07 	 4.1355E-05 	 6.6079E-05 	 8.6176E-04
86000 	 2.2655E-09 	 2.3839E-05 	 8.8364E-07 	 4.1350E-05 	 6.6073E-05 	 8.6175E-04
87000 	 1.9477E-09 	 2.3838E-05 	 8.8354E-07 	 4.1346E-05 	 6.6068E-05 	 8.6177E-04
88000 	 1.6745E-09 	 2.3838E-05 	 8.8364E-07 	 4.1345E-05 	 6.6067E-05 	 8.6175E-04
89000 	 1.4396E-09 	 2.3837E-05 	 8.8362E-07 	 4.1344E-05 	 6.6065E-05 	 8.6172E-04
90000 	 1.2377E-09 	 2.3837E-05 	 8.8368E-07 	 4.1343E-05 	 6.6064E-05 	 8.6170E-04
91000 	 1.0641E-09 	 2.3837E-05 	 8.8361E-07 	 4.1340E-05 	 6.6060E-05 	 8.6169E-04
92000 	 9.1481E-10 	 2.3837E-05 	 8.8365E-07 	 4.1339E-05 	 6.6060E-05 	 8.6169E-04
93000 	 7.8649E-10 	 2.3837E-05 	 8.8365E-07 	 4.1339E-05 	 6.6060E-05 	 8.6168E-04
94000 	 6.7617E-10 	 2.3838E-05 	 8.8368E-07 	 4.1338E-05 	 6.6059E-05 	 8.6167E-04
95000 	 5.8132E-10 	 2.3838E-05 	 8.8374E-07 	 4.1336E-05 	 6.6058E-05 	 8.6166E-04
96000 	 4.9978E-10 	 2.3838E-05 	 8.8377E-07 	 4.1337E-05 	 6.6058E-05 	 8.6167E-04
97000 	 4.2968E-10 	 2.3838E-05 	 8.8382E-07 	 4.1336E-05 	 6.6058E-05 	 8.6167E-04
98000 	 3.6941E-10 	 2.3837E-05 	 8.8382E-07 	 4.1336E-05 	 6.6057E-05 	 8.6166E-04
99000 	 3.1759E-10 	 2.3837E-05 	 8.8383E-07 	 4.1337E-05 	 6.6058E-05 	 8.6166E-04
Completed!!

Neural Network - Solution

In [33]:
# Predictions from the NN

u_NN_predict = PINN(x_test)
In [34]:
#Reshaping the predictions to a [total_points_x, total_points_t] grid to be used in plot3D()

u_NN_2D = u_NN_predict.reshape(shape=[total_points_t,total_points_x]).transpose(1,0).detach().cpu()

assert u_NN_2D.shape == torch.Size([total_points_x, total_points_t]),f"Expected [{total_points_x},{total_points_t}], got {u_NN_2D.shape}"
In [35]:
RMSE = torch.sqrt(torch.mean(torch.square(torch.subtract(u_NN_2D,u_fem_2D))))

print("The RMSE error between FDM and PINN is :",np.around(RMSE.item(),5))
The RMSE error between FDM and PINN is : 0.02935
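As an additional, optional check (not part of the original run), one can also report the relative L2 error between the two fields, which is the metric commonly quoted in the PINN literature; it can be computed directly from the u_NN_2D and u_fem_2D tensors above:

rel_l2 = torch.linalg.norm(u_NN_2D - u_fem_2D) / torch.linalg.norm(u_fem_2D)
print("Relative L2 error between FDM and PINN:", np.around(rel_l2.item(), 5))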
In [36]:
file_name_loss = "LOSS_CURVES_RMSE_"+str(np.around(RMSE.item(),5))+"_Epochs_"+str(EPOCHS)+"_lr_"+str(initial_lr)+".png"
fig, ax1 = plt.subplots(figsize=(11, 7), dpi= 100)
ax2 = ax1.twinx()
ax1.plot(Epoch, IC_Loss, "b-",label = "IC Loss")
ax1.plot(Epoch, BC_Loss, "g-",label = "BC Loss")
ax1.plot(Epoch, PDE_Loss, "c-",label = "PDE Loss")
ax1.plot(Epoch, Total_Loss, "k-",label = "Total Loss")
ax1.plot(Epoch, Test_Loss, "m-",label = "Test Loss")
ax2.plot(Epoch,Learning_Rate, "ro",markersize=1,label = "Learning Rate")
ax1.set_xlabel('Epochs')
ax1.set_ylabel('Losses', color='k')
ax2.set_ylabel('Learning Rate', color='k')
ax2.legend(loc=7)
ax1.legend(loc=1)
plt.title(file_name_loss)
#plt.show()
plt.savefig("/content/drive/MyDrive/Colab Notebooks/PINNs/Burgers/"+file_name_loss)

3D plot of solution for U based on Neural Network (PINN)

In [37]:
plot3D(x,t,u_NN_2D)

3D plot of solution for U based on Finite Difference Method computed earlier.

In [38]:
plot3D(x,t,u_fem_2D)

3D plot of Error between Finite Difference Method and Physics Informed Neural Network model.

In [39]:
plot3D(x,t,(u_NN_2D - u_fem_2D)) #Error
In [40]:
file_name_result = "Result_RMSE_"+str(np.around(RMSE.item(),5))+"_Epochs_"+str(EPOCHS)+"_lr_"+str(initial_lr)+".png"

last_U_NN = u_NN_2D[:,-1].unsqueeze(1) #Extracting the last U values at t=0.48

fig, ax1 = plt.subplots(figsize=(11, 7), dpi= 100)
ax2 = ax1.twinx()

ax1.plot(x, u_plot, "g",label="Initial Condition at t=0")
ax1.plot(x, u_final, "r",label="FDM at t=0.48", )
ax1.plot(x, last_U_NN.detach().cpu().numpy(), "bo",label="PINN Predict at t=0.48", markersize=1.5)
ax1.legend()
plt.title(file_name_result)
plt.savefig("/content/drive/MyDrive/Colab Notebooks/PINNs/Burgers/"+file_name_result)
In [41]:
#Saving the model

file_name_model = "Result_RMSE_"+str(np.around(RMSE.item(),5))+"_Epochs_"+str(EPOCHS)+"_lr_"+str(initial_lr)+".pth"
torch.save(PINN, "/content/drive/MyDrive/Colab Notebooks/PINNs/Burgers/"+file_name_model)

Conclusion

The PINN performs exceptionally well and is much more stable than the Finite Difference Method, especially in the shock region near t = 0.48. There is some "error" between the FDM and PINN solutions in the shock region; further assessment and investigation are necessary.

Final Results

Final Results

References

[1] Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2017). Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561. http://arxiv.org/pdf/1711.10561v1

[2] https://www.iist.ac.in/sites/default/files/people/IN08026/Burgers_equation_viscous.pdf

[3] https://en.wikipedia.org/wiki/Burgers%27_equation

[4] Rackauckas, Chris. Introduction to Scientific Machine Learning through Physics-Informed Neural Networks. https://book.sciml.ai/notes/03/

[5] https://github.com/jdtoscano94/Learning-Python-Physics-Informed-Machine-Learning-PINNs-DeepONets

[6] Kashefi, Ali and Tapan Mukerji. “Physics-informed PointNet: A deep learning solver for steady-state incompressible flows and thermal fields on multiple sets of irregular geometries.” J. Comput. Phys. 468 (2022): 111510.


In [ ]:
####################   ANIMATION   ########################
In [43]:
#Imports for animation and display within a jupyter notebook
from matplotlib import animation, rc 
from IPython.display import HTML

#Generating the figure that will contain the animation
fig, ax = plt.subplots()
fig.set_dpi(100)
fig.set_size_inches(6, 5)
ax.set_xlim(( x_min, x_max))
ax.set_ylim((-1.2, 1.2))
pinn_line, = ax.plot([], [], "bo", markersize=3, label='Neural Network') #renamed so we don't shadow the trained PINN model
fdm_line, = ax.plot([], [], "r", lw=2, label='Finite Difference')
ax.legend();
plt.xlabel('x')
plt.ylabel('u')
plt.title('1D Burgers Equation Time Evolution from t=0 to t=0.48');
In [44]:
#Initialization function for funcanimation
def init():
    fdm_line.set_data([], [])
    pinn_line.set_data([], [])
    return (pinn_line, fdm_line)
In [45]:
#Main animation function, each frame represents a time step in our calculation
def animate(j):
    
    unn = u_NN_2D[:,j].unsqueeze(1).detach().cpu().numpy()

    ufdm = u_fem_2D[:,j].unsqueeze(1).detach().cpu().numpy()
    
    pinn_line.set_data(x, unn)
    fdm_line.set_data(x, ufdm)
    
    return (pinn_line, fdm_line,)
In [46]:
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=total_points_t, interval=100)
anim.save('/content/drive/MyDrive/Colab Notebooks/PINNs/Burgers/1dBurgers_06102022_1300.gif',animation.PillowWriter(fps=60))
#HTML(anim.to_jshtml())

Final Results

In [ ]:
#jupyter nbconvert --to html <filename.ipynb>