Showing posts with label Development. Show all posts

Wednesday, November 13, 2024

C# .NET client to publish 5 million events to Universal Messaging channel\IS Publishable document

In this blog article I have created a C# .NET client using the Universal Messaging C# .NET API. This is in contrast to my previous two articles, where I used Java with Message Builders (as described in the IBM official documentation) and Dynamic Builders to compose a Protobuf event and publish a large amount of data to a UM Channel\IS Publishable document. I took on the effort of doing the same thing in C# to see whether there would be a performance difference, or a more efficient way to send messages to a UM Channel, and to my surprise the performance metrics differ considerably between the two implementations. I describe the performance differences, measured against the same UM and IS setup, in more detail later in this post. First, generate the C# code from the proto file. The command below is meant to be run from the same folder where the proto file is located.
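The original post showed the command as a screenshot. A typical invocation looks like the following; the proto file name SalesPdt.proto is an assumption on my part (protoc derives the SalesPdt.cs output name from the Pascal-cased proto file name), and protoc is assumed to be on the PATH:

```shell
# Generate C# classes from the proto file in the current directory.
# SalesPdt.proto is an assumed file name, not confirmed by the post.
protoc --csharp_out=. SalesPdt.proto
```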

This will generate a new .cs file called SalesPdt.cs, as shown below.




At this point we have the C# file generated by the protoc compiler, which we can import into our existing solution to build a Protobuf event from the data, then loop over each generated event and publish it to UM.

I have listed step-by-step instructions on how to implement a C# .NET program that generates a Protobuf event and publishes it to UM. You can download the PDF document here.

For any questions or inquiries, feel free to reach out to aarremreddy@silverspringtech.com

Program.cs










Sales.cs



UMUtilities.cs



Performance Metrics:
A first glance at the results shows that publishing 5 million records to UM took less than half the time from C# compared with Java. Drilling into the details, however, the way messages are serialized and deserialized differs significantly between using Descriptors/Builders from the library and using the code auto-generated by the protoc compiler. That said, I have not run a detailed performance test between the Java and C# APIs, so I cannot claim that one outperforms the other. One thing I can say for sure is that it was a lot easier to write the C# code using the generated class than to write the Java code using the Descriptors and Builders described in the IBM documentation.

Wednesday, February 6, 2013

List files and directories recursively

This Java service lists all files, including those in sub-directories, with their full path names.


IDataCursor pipelineCursor = pipeline.getCursor();
String dirPath = IDataUtil.getString( pipelineCursor, "dirPath" );

List<String> output = extract(dirPath, new ArrayList<String>());

IDataUtil.put( pipelineCursor, "dirList", output.toArray(new String[output.size()]));
pipelineCursor.destroy();
Shared Code:
public static List<String> extract(String directoryName, List<String> outFiles) {
    File directory = new File(directoryName);
    File[] fileList = directory.listFiles();
    if (fileList == null) {
        // Not a directory, or an I/O error occurred.
        return outFiles;
    }
    for (File entry : fileList) {
        outFiles.add(entry.toString());
        if (entry.isDirectory()) {
            // Recurse into the sub-directory.
            extract(entry.toString(), outFiles);
        }
    }
    return outFiles;
}
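As a side note, on Java 8 and later the manual recursion can be replaced with java.nio.file.Files.walk. A minimal, self-contained sketch (the class and method names are mine, not part of the service):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DirWalkSketch {
    // Returns full paths of every file and directory under root,
    // excluding root itself.
    public static List<String> listRecursively(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(p -> !p.equals(root))
                        .map(Path::toString)
                        .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a small temp tree: root/a.txt and root/sub/b.txt.
        Path tmp = Files.createTempDirectory("walkDemo");
        Files.createFile(tmp.resolve("a.txt"));
        Path sub = Files.createDirectories(tmp.resolve("sub"));
        Files.createFile(sub.resolve("b.txt"));
        System.out.println(listRecursively(tmp).size()); // prints 3
    }
}
```

The try-with-resources around the stream also closes the underlying directory handles, which the listFiles-based recursion does not need to worry about but which matters for deep trees.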

Tuesday, February 5, 2013

Java service to create a Tar file using Apache Commons Compress library.

There is often a need to create a tar file by archiving log files or other files. You can create a tar file using native Java libraries as well as third-party libraries. The advantage of a third-party library is that it provides out-of-the-box methods to write the archive. The disadvantage is having to go through the license agreements of the jars to make sure you are not violating any of them.


IDataCursor pipelineCursor = pipeline.getCursor();
String directoryName = IDataUtil.getString( pipelineCursor, "directoryName" );

File dirName = new File(directoryName);
File[] fileNames = dirName.listFiles();
try {
    OutputStream tarfileOutstream = new FileOutputStream(new File("E:/aarremreddy/Tasks/Compress/test.tar"));
    ArchiveOutputStream aos = new ArchiveStreamFactory().createArchiveOutputStream(ArchiveStreamFactory.TAR, tarfileOutstream);

    for (File tarInputfile : fileNames) {
        TarArchiveEntry archiveEntry = new TarArchiveEntry(tarInputfile);
        archiveEntry.setSize(tarInputfile.length());
        aos.putArchiveEntry(archiveEntry);
        InputStream in = new FileInputStream(tarInputfile);
        IOUtils.copy(in, aos);
        in.close();               // close each input once it has been copied
        aos.closeArchiveEntry();
    }

    aos.finish();
    aos.close();                  // also closes the underlying file stream
    IDataUtil.put(pipelineCursor, "result", "Success");
}
catch (FileNotFoundException e) {
    IDataUtil.put(pipelineCursor, "result", e.toString());
} catch (ArchiveException e) {
    IDataUtil.put(pipelineCursor, "result", e.toString());
} catch (IOException e) {
    IDataUtil.put(pipelineCursor, "result", e.toString());
}
pipelineCursor.destroy();
Imports:
import org.apache.commons.compress.archivers.ArchiveException;
import org.apache.commons.compress.archivers.ArchiveOutputStream;
import org.apache.commons.compress.archivers.ArchiveStreamFactory;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.utils.IOUtils;
Jars:
commons-compress-1.4.1.jar;
commons-io-2.4.jar;

Tuesday, December 11, 2012

Consuming webMethods WSD in C#.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;


namespace TANFWorkSearch
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            var ws = new TANFWorkSearchWSD.TANFWorkSearch();


            TANFWorkSearchWSD.WorkSearchOut wsInput = new TANFWorkSearchWSD.WorkSearchOut();


            DateTime dateTime = DateTime.Now;
            TANFWorkSearchWSD.SGData sgData = new TANFWorkSearchWSD.SGData();
            sgData.RequestEnvironment = TANFWorkSearchWSD.RequestEnvironment.C;
            sgData.RequestTimeStamp = dateTime;
            sgData.Caller = TANFWorkSearchWSD.Caller.C;
            sgData.ServiceVersion = "serviceVersionPlaceHolder";


            TANFWorkSearchWSD.BusinessData businessData = new TANFWorkSearchWSD.BusinessData();


            TANFWorkSearchWSD.Individual individual = new TANFWorkSearchWSD.Individual();
            double recID=12.30;
            individual.RecipientID = recID.ToString();
            individual.FirstName = "firstName";
            individual.LastName = "lastName";
            individual.MiddleInitial = "Middle";
            individual.DOB =dateTime;
            individual.SSN = "1234";
            individual.BeginDate = dateTime;
            individual.EndDate = dateTime;
            individual.UserID = "1234";
            individual.ParticipantID = "1234";


            ws.GetWorkSearchInfo(wsInput);


        }
    }
}

Search String in text file and return line number.

Inputs:
searchString
fileName

outputs:
exists
lineNum
error

Code:
IDataCursor pipelineCursor = pipeline.getCursor();
String searchString = IDataUtil.getString( pipelineCursor, "searchString" );
String fileName = IDataUtil.getString( pipelineCursor, "fileName" );

File file = new File(fileName);
int lineNum = 0;
IDataUtil.put( pipelineCursor, "exists", "false");

try {
    Scanner in = new Scanner(new FileReader(file));
    String needle = searchString.toLowerCase();   // case-insensitive match
    while (in.hasNextLine()) {
        lineNum++;                                // 1-based line numbers
        String inString = in.nextLine().toLowerCase();
        if (inString.indexOf(needle) >= 0) {
            IDataUtil.put( pipelineCursor, "lineNum", Integer.toString(lineNum));
            IDataUtil.put( pipelineCursor, "exists", "true");
            break;
        }
    }
    in.close();
} catch (FileNotFoundException e) {
    IDataUtil.put( pipelineCursor, "error", e.toString());
}
pipelineCursor.destroy();
Imports:

import java.io.FileReader;
import java.util.Scanner;
import java.io.File;
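The core matching logic can be factored out and tested independently of the pipeline plumbing. A minimal sketch (the class and method names are mine, not part of the service):

```java
import java.util.Arrays;
import java.util.List;

public class LineSearchSketch {
    // Returns the 1-based line number of the first line containing needle
    // (case-insensitive), or -1 when no line matches.
    public static int findLine(List<String> lines, String needle) {
        String lowered = needle.toLowerCase();
        for (int i = 0; i < lines.size(); i++) {
            if (lines.get(i).toLowerCase().contains(lowered)) {
                return i + 1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("alpha", "Beta line", "gamma");
        System.out.println(findLine(lines, "beta"));  // prints 2
        System.out.println(findLine(lines, "delta")); // prints -1
    }
}
```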

Split one large file into multiple small files based on break length - large file handling.

Inputs:

fileName: Absolute File Name path to be split.
lineBreak: number of lines when the break needs to happen.
outFileNamePrefix: output file name prefix, absolute path.

Code:


IDataCursor pipelineCursor = pipeline.getCursor();
String fileName = IDataUtil.getString( pipelineCursor, "fileName" );
String lineBreak = IDataUtil.getString( pipelineCursor, "lineBreak" );
String outFileNamePrefix = IDataUtil.getString( pipelineCursor, "outFileNamePrefix" );

int breakLenInt = Integer.parseInt(lineBreak);
int count = 0;
int currentLineNumber = 0;
String line = null;

try {
    FileInputStream inputStream = new FileInputStream(new File(fileName));
    InputStreamReader streamReader = new InputStreamReader(inputStream, "UTF-8");
    BufferedReader reader = new BufferedReader(streamReader);
    BufferedWriter bw = new BufferedWriter(new FileWriter(outFileNamePrefix + "_" + count + ".txt", true));
    while ((line = reader.readLine()) != null) {
        if (currentLineNumber == breakLenInt) {
            // Reached the break length: close the current chunk and start a new one.
            bw.close();
            count++;
            currentLineNumber = 0;
            bw = new BufferedWriter(new FileWriter(outFileNamePrefix + "_" + count + ".txt", true));
        }
        bw.write(line);
        bw.newLine();
        currentLineNumber++;
    }
    bw.close();
    reader.close();
    IDataUtil.put( pipelineCursor, "result", "Success" );
} catch (FileNotFoundException e) {
    IDataUtil.put( pipelineCursor, "result", e.toString() );
} catch (UnsupportedEncodingException e) {
    IDataUtil.put( pipelineCursor, "result", e.toString() );
} catch (IOException e) {
    IDataUtil.put( pipelineCursor, "result", e.toString() );
}
pipelineCursor.destroy();
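The chunking itself reduces to splitting a sequence of lines into consecutive groups of at most N elements, which is easy to verify in isolation. A minimal sketch (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FileSplitSketch {
    // Splits lines into consecutive chunks of at most chunkSize lines each;
    // the last chunk may be shorter.
    public static List<List<String>> split(List<String> lines, int chunkSize) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(lines.subList(i, Math.min(i + chunkSize, lines.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("l1", "l2", "l3", "l4", "l5");
        System.out.println(split(lines, 2)); // prints [[l1, l2], [l3, l4], [l5]]
    }
}
```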

Friday, April 20, 2012

Convert an XML Node in pipeline to String.

When dealing with content handlers, the data comes into the pipeline as an XML node object ($contentStream). This code converts that XML node object to string and stream outputs.

Input: node, datatype: object
Output: outStr, datatype: String & outStream, datatype: object

Import the below classes.


import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.StringWriter;
import java.io.UnsupportedEncodingException;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Node;


Code:


IDataCursor pipelineCursor = pipeline.getCursor();
Node node = (Node) IDataUtil.get( pipelineCursor, "node" );

StringWriter sw = new StringWriter();
try {
    Transformer t = TransformerFactory.newInstance().newTransformer();
    t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
    t.setOutputProperty(OutputKeys.INDENT, "yes");
    t.transform(new DOMSource(node), new StreamResult(sw));

    String x = sw.toString();
    IDataUtil.put( pipelineCursor, "outStr", x);

    InputStream outStream = new ByteArrayInputStream(x.getBytes("UTF-8"));
    IDataUtil.put( pipelineCursor, "outStream", outStream);

} catch (TransformerException e) {
    IDataUtil.put( pipelineCursor, "outStr", e.toString());
} catch (UnsupportedEncodingException e) {
    IDataUtil.put( pipelineCursor, "outStr", e.toString());
}
pipelineCursor.destroy();
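The node-to-string conversion uses only JDK classes, so it can be exercised outside the Integration Server. A self-contained sketch (class and method names are mine; the sample XML is invented for illustration):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Node;
import org.xml.sax.InputSource;

public class NodeToStringSketch {
    // Serializes a DOM node to a String using the JDK Transformer API.
    public static String nodeToString(Node node) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter sw = new StringWriter();
        t.transform(new DOMSource(node), new StreamResult(sw));
        return sw.toString();
    }

    public static void main(String[] args) throws Exception {
        // Parse a small document so there is a Node to serialize.
        Node doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new InputSource(new StringReader("<order><id>42</id></order>")));
        System.out.println(nodeToString(doc)); // prints <order><id>42</id></order>
    }
}
```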


Monday, January 16, 2012

Consuming WSDL when schema's are not reachable

In this post I will discuss how to consume a WSDL that references remote schemas through URL references.

Most of the time, development boxes are not in the DMZ of the network for various security reasons, which makes consuming third-party vendor WSDLs hosted outside the intranet a bit painful in webMethods. To overcome this problem, we can edit the WSDL locally, modifying the URL/URI locations either to include the XSDs in the WSDL or to reference the locally downloaded copies.

Steps:

1. Download the WSDL and the XSDs from the remote location on a box where you can access them.
2. Copy all these files to the target package's resources directory.
Ex: ..\IntegrationServer\packages\packageName\resources
3. Open the WSDL in Eclipse or an XML editor and change the URL references to the XSDs to file locations using the "file://" scheme.

ex:
before the change:


  <xsd:import namespace="http://remote.service.functionality.pogeneration.google.com/" schemaLocation="POGenerationWebService_schema1.xsd" />


After the change:
 <xsd:import namespace="http://remote.service.functionality.pogeneration.google.com/" schemaLocation="file:///D:/SoftwareAG/IntegrationServer/packages/packageName/resources/POGenerationWebService_schema1.xsd"/>

or

 <xsd:import namespace="http://remote.service.functionality.pogeneration.google.com/" schemaLocation="file:///../POGenerationWebService_schema1.xsd"/>



4. Repeat for all the XSD references and validate the WSDL in Eclipse.
5. If the WSDL is valid after the changes, go to Developer and consume the WSDL.

The reason for putting them in the resources directory is to make sure that all the artifacts are bundled in the same package and can be ported during deployment.

Cons:

  • Maintaining a modified version of the WSDL, different from the one supplied by the client.
  • Keeping the local copies of the schemas up to date with the client.


MTOM implementation

In this post I will discuss how to send email with MTOM attachments using webMethods services.

Steps:

1. Create a Flow service senderSvc which will get an image file and then convert it to a base64 encoded string. This is our harness service replicating the data feeder.
2. Create a Flow service receiverSvc with the inputs fileName (String) and data (String with the base64Binary contentType set). The output is a success (String) flag. This will be the receiver service.
3. Create a provider WSD receiverWSD for the receiverSvc and set the property "Attachment Enabled" to true.
4. Consume the receiver WSD and make a call to the connector in senderSvc.
5. Post the data from senderSvc to be received by receiverSvc, and then send it in an email using the pub.client:smtp service.
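Step 1, the base64 conversion, can be done with the JDK's Base64 class (Java 8+). A minimal sketch of the encode/decode round trip (class name and sample bytes are mine, for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Sketch {
    // Encodes raw bytes (e.g. an image file read into memory) to a
    // base64 string for the "data" input of the receiver service.
    public static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    // Decodes the base64 string back to the original bytes on the receiving side.
    public static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        byte[] raw = "sample image bytes".getBytes(StandardCharsets.UTF_8);
        String encoded = encode(raw);
        System.out.println(encoded); // prints c2FtcGxlIGltYWdlIGJ5dGVz
        System.out.println(new String(decode(encoded), StandardCharsets.UTF_8)); // prints sample image bytes
    }
}
```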

senderSvc:

receiverSvc:



Sample email:




Friday, January 13, 2012

Setting up Designer

In this tutorial you will learn how to set up Designer for development.

1. Go to Window -> Preferences -> SoftwareAG -> Integration Servers.




2. Click Add and then add the IS that you want to connect to for development.

3. Click Verify Server; if the test is successful, the configuration is complete.