76. What is heap memory?
Heap memory is the run-time storage space in which the JVM allocates the objects an application creates. In WAS the defaults are: Initial Heap Size: 50 MB; Maximum Heap Size: 256 MB.
If an out-of-memory error occurs, how do you handle it?
Increase the heap memory size, as sketched below.
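The sizes are set with the standard JVM options -Xms (initial) and -Xmx (maximum); in WAS they appear in the admin console as the Initial and Maximum heap size fields of the server's JVM settings. A minimal sketch of raising the limit (the jar name is only a placeholder):
java -Xms50m -Xmx512m -jar app.jar
Raising -Xmx gives the heap more room before an OutOfMemoryError occurs; if the error persists, investigate the application for memory leaks.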
77. If we give the same heap size value for both min and max, what are the advantages and what are the disadvantages?
The Java heap parameters influence the behavior of garbage collection. Increasing the heap size supports more object creation. Because a large heap takes longer to fill, the application runs longer before a garbage collection occurs. However, a larger heap also takes longer to compact and causes garbage collection to take longer.
The JVM has thresholds it uses to manage the JVM's storage. When the thresholds are reached, the garbage collector gets invoked to free up unused storage. Therefore, garbage collection can cause significant degradation of Java performance. Before changing the initial and maximum heap sizes, you should consider the following information:
In the majority of cases you should set the maximum JVM heap size to a value higher than the initial JVM heap size. This allows the JVM to operate efficiently during normal, steady-state periods within the confines of the initial heap, but also to operate effectively during periods of high transaction volume by expanding the heap up to the maximum JVM heap size.
In some rare cases where absolute optimal performance is required you might want to specify the same value for both the initial and maximum heap size. This will eliminate some overhead that occurs when the JVM needs to expand or contract the size of the JVM heap. Make sure the region is large enough to hold the specified JVM heap.
Beware of making the Initial Heap Size too large. While a large heap size initially improves performance by delaying garbage collection, a large heap size ultimately affects response time when garbage collection eventually kicks in because the collection process takes more time.
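To observe this trade-off, you can pin the heap and print each collection with verbose GC; a sketch with placeholder sizes and jar name:
java -Xms512m -Xmx512m -verbose:gc -jar app.jar
With -Xms equal to -Xmx the expand/contract overhead disappears, but every collection works against the full-size heap from the start.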
78. What is FFDC?
The first failure data capture (FFDC) log file saves information that is generated from a processing failure. These files are deleted after a maximum number of days have passed.
FFDC produces two artifacts, which can be located in the <Install Root>/logs/FFDC directory:
* Exception logs: <ServerName>_Exception.log
* Incident streams: <ServerName>_<threadid>_<timeStamp>_<SequenceNumber>.txt
Exception Log
Row elements
The exception log contains all of the exception paths that have been encountered since the server started. Because of optimizations in the data collection, the table gives an overview of the exceptions that have been encountered in the server. An entry in the table looks like this:
Index  Occurrences  Time of last Occurrence     Exception            SourceId                                                    ProbeId
----------------------------------------------------------------------------------------------------------------------------------------
1      1            02.04.11 13:12:33:711 CDT   java.io.IOException  com.ibm.ws.webcontainer.http.HttpTransport.startTransport  103
The first element in the row is a simple index, used to determine the number of rows in the table. In some entries a '+' may appear in the first column; this indicates that the row has been added to the table since the last time the entire table was dumped.
The second element is the number of occurrences. This is useful for seeing whether an unusual number of exceptions is occurring.
The third element in the row is a time stamp for the last occurrence of the exception. This is useful in looking at exceptions which have occurred at about the same time.
The last element in the row is a combination of values: the exception name, a source Id, and the probeId. This information is useful for locating information in the incident stream about the specific failure.
File content
The makeup of the file can be a little confusing when first viewed. The file is an accumulation of all of the dumps that have occurred over the life of the server. This means that much of the information in the file is out of date and does not apply to the current server. The most relevant information is at the end (tail) of the file.
It is quite easy to locate the last dump of the exception table. The dump is delimited by '-------------------...'. Entries that begin with a '+' appear outside the delimited table and indicate that they are additions to the table since the last time it was dumped. (Again, due to performance concerns, the table is dumped only periodically, and when the server is stopping.)
79. Here is a screen image of the end of the Server1_Exception.log. How do you make it more readable?
The information in the file is displayed unordered, in hash-table order. A more viewable form of the file is produced by sorting the output on the time stamp. (This is done using MKS commands; hopefully they are available on your system.)
To produce sorted output of only the last dump of the exception table in Server1_Exception.log, use the following command:
tail -n <n> <server_name>_exception.log | sort -k4n
where <n> is the number of exceptions in the exception table plus 1 (use the index value to determine this value), and <server_name> is the name of the server.
Note: The sort key needs a little work for servers which have rolled the data.
For demonstration purposes, the start, run, and stop times have been included in the exception log.
Incident Stream
The incident stream contains more details about exceptions which have been encountered during the running of the server. Depending on the configuration of the property files, the content of the incident streams will vary.
With the default property-file settings, the incident stream will not contain information for exceptions encountered during server startup (because of Level=1 in ffdcStart.properties). But once the server reaches the ready state, any new exception that is encountered will be processed.
The incident stream files should be used in conjunction with the exception log. The values contained in the exception log will, in most instances, have a corresponding entry in the incident stream. The relationship between the exception log and the incident stream is the hash code, which is made up of the exception type, the source Id, and the probeId. The simplest way to look at this information is to use the grep command; the information is not all contained on the same line, so if you need to know the exact file containing the value, you can use a compound grep command, as shown below.
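A sketch of such a search, using the exception and sourceId from the sample table above (the ffdc directory path is a placeholder; adjust it to your own profile):
FFDC_DIR=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/ffdc
grep -l "java.io.IOException" $FFDC_DIR/*.txt
grep -l "java.io.IOException" $FFDC_DIR/*.txt | xargs grep -l "HttpTransport"
The first command lists the incident files that mention the exception; the second narrows the list to files that also name the sourceId.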
File content
The file contains information on each exception that has been encountered. Each exception entry contains information that corresponds to the entry (exception name, source Id, and probe Id) in the exception table documented above. If the exception is caught in a non-static method, the contents of the 'this' pointer are recorded. In some instances, if there is a diagnostic module (DM) that corresponds to the current execution, the DM will write information about the state of the object to the incident stream.
The call stack will also be written to the incident stream.
In some instances an exception encountered while the server is running will not produce a call stack. This is because the exception was first encountered during the start of the server, and exceptions seen during startup are considered normal-path exceptions. All of the exceptions can be seen by looking either at all of the runtime exceptions or at all of the exceptions.
80. How many SSL certificate authorities are available in today's market?
There are many SSL CAs. Some of them are:
Entrust
VeriSign
GeoTrust
RSA, etc.
81. Tell me about class loaders and where we use them.
Class loaders enable the Java Virtual Machine (JVM) to load Java classes. Given the name of a class, the class loader locates the definition of that class. Each Java class must be loaded by a class loader.
There are three class loaders:
Bootstrap class loader
The Extensions class loader
The application class loader
The default class loader option is the parent-first policy; the sketch below shows the loaders at work.
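One simple way to watch class loading and the parent-first delegation is the JVM's -verbose:class option, which prints each class as it is loaded and the location it was loaded from (MyApp is a placeholder class name):
java -verbose:class -cp . MyApp
Core classes are reported from the JDK's runtime library before anything is taken from the classpath, which is the parent-first order in action.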
82. How many certifications are available for WAS?
Application Servers: Distributed Application and Web Servers
  Test 377, IBM WebSphere Application Server, Network Deployment, V7.0, Core Administration (I, May 2009)
Business Integration: Application Integration and Connectivity
  Test 378, IBM WebSphere DataPower SOA Appliances Firmware V3.7.x (I, June 2009)
  Test 374, IBM WebSphere MQ V7.0, System Administration (I, July 2009)
  Test 376, IBM WebSphere MQ V7.0, Solution Design (I, August 2009)
Business Integration: Dynamic Business Process Management
  Test 372, IBM WebSphere Business Modeler Advanced V6.2, Business Analysis and Design (I, July 2009)
  Test 375, IBM WebSphere Process Server V6.2, System Administration (I, October 2009)
Commerce: Web Commerce
  None in plan.
Software Development: Web Services
  Test 371, Web Services Development for IBM WebSphere Application Server V7.0 (I, August 2009)
* E = entry; I = intermediate; A = advanced
83. What is the command to create a profile?
manageprofiles.sh -create -profileName <profile_name> -profilePath <profile_path>
-nodeName <node_name> -templatePath <template_path> -cellName <cell_name> -hostName <host_name>
List profiles:
manageprofiles.sh -listProfiles
Delete a profile:
manageprofiles.sh -delete -profileName <profile_name>
84. In how many ways can we deploy an application, and what is the command to deploy an application?
It depends on the version of WAS we are using, but 5.x and above provide the following options.
Using the admin console:
Provide the required parameters, such as the full path, context root, etc.
Hot deployment:
Copying the JAR files directly into WebSphere's installedApps folder is called hot deployment.
Dropping in JSP files with class reloading enabled (not recommended for production).
Using the wsadmin command:
Using Jacl or Jython scripts (a sample wsadmin invocation appears after this list).
Rapid deployment (feature available in 6.x):
WebSphere Rapid Deployment (WRD) simplifies the development and deployment of applications. Its capabilities include annotation-based programming, deployment automation, and change-triggered processing. To use WRD functionality, no changes are required on the application server; it uses the existing application server administration functions to deploy and control applications.
Annotation-based programming allows the developer to add metadata tags into application source code. WRD uses the metadata to generate additional J2EE artifacts needed to run the application on the application server environment.
Change-triggered processing provides automatic monitoring of changes to the WRD user workspace; changes trigger the automatic generation of code and the deployment of the application to the application server.
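As a sketch of the wsadmin route (the profile path, EAR location, and application name are placeholders; AdminApp.install and AdminConfig.save are the standard wsadmin scripting objects):
cd /opt/IBM/WebSphere/AppServer/profiles/Dmgr01/bin
./wsadmin.sh -lang jython -c "AdminApp.install('/tmp/MyApp.ear', '[-appname MyApp -usedefaultbindings]')" -c "AdminConfig.save()"
The same calls can be placed in a Jython script file and run with wsadmin.sh -f instead of -c.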
85. What is the authentication mechanism in a JDBC driver?
In the JDBC driver configuration we can configure the authentication details in the J2C authentication panel. These are the credentials used to log in to the relational database.
How will you secure your administrative console? If you are using the local O/S user registry and you are getting messages such as "not able to authenticate", what will you do? What is the solution?
There might be a privileges issue for the user at the O/S level, so we need to give the user the proper privileges by logging in as a system administrator.
86. What is the difference between WAR, EAR, and JAR, and what is the difference in deploying them?
In J2EE, application modules are packaged as EAR, JAR, and WAR files based on their functionality:
JAR: EJB modules, which contain enterprise java bean class files and the EJB deployment descriptor, are packaged as JAR files with the .jar extension.
WAR: Web modules, which contain servlet class files, JSP files, supporting files, and GIF and HTML files, are packaged as JAR files with the .war (web archive) extension.
EAR: All of the above files (.jar and .war) are packaged as a JAR file with the .ear (enterprise archive) extension and deployed to the application server.
There is not much difference in deploying these applications. We need to give a context root for a WAR; for the others there is no need.
EAR deployment:
If we have two or more modules, we can target individual modules to individual servers.
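Because all three are ordinary JAR archives, the JDK's jar tool can build and inspect any of them; a sketch with placeholder file names:
jar -cvf MyWeb.war WEB-INF/ index.jsp
jar -tf MyApp.ear
The first command packages a web module; the second lists the .war and .jar modules inside an EAR.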
87. How will you solve it if you get "page cannot be displayed"?
It is an HTTP 404 error. If you get this error, check the logs for the application server status. The page expected by the request is not found, which means the request is reaching the server but the resource is not available at the expected location. A quick check is sketched below.
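A quick way to confirm is to request the URL directly and watch the server log (host, port, context root, and profile path are placeholders):
curl -I http://apphost:9080/mycontext/
tail -f /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log
curl -I shows the actual HTTP status the server returns, and SystemOut.log shows whether the application started and the web module is mapped.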
88. What is a cluster, and how are requests routed between cluster members?
A cluster is a group of application servers that run the same application, providing workload balancing and failover. The load-balancing algorithm we select routes the requests between the members. There are two algorithms:
Round robin
Random
89. Can you give me two major issues you faced and solved?
Issue 1: The application was having an SSL error, showing a bad certificate in the application's right corner, so the customer requested the root cause of GSK_ERROR_BAD_CERT.
I investigated a certificate mismatch between the plug-in and WebSphere.
In the WAS console I found that the default personal certificate at the node level of WAS, which was added in the DMGR, was not reflected in the web servers.
The steps I followed to resolve this are:
1. I noted down the personal certificate's serial number from the nodes by navigating to:
Security --> SSL certificate and key management --> Manage endpoint security configurations --> Inbound --> expand cell --> Node --> Key stores and certificates --> NodeDefaultKeyStore --> Personal Certificates
2. Noted down the serial number of the default certificate, then extracted the certificate to the server's temp path.
3. Went to Inbound/Outbound --> expand cell --> Node --> web server --> Key stores and certificates --> CMSKeyStore --> Signer certificates, and verified the serial numbers of the node certificates noted earlier.
4. I found that one of the certificates did not appear here in the web server, so I added it from here. As I had already enabled the "Dynamically update the run time when SSL configuration changes occur" option, it updated without a restart.
5. Then I opened plugin-key.kdb using iKeyman to verify whether the added certificate was updated in the KDB (see the command sketch below).
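The same check can be scripted with the GSKit command line instead of the iKeyman GUI; a sketch (the kdb path is a placeholder, and WebAS is the plug-in key database's default password unless it has been changed):
gsk7cmd -cert -list -db /opt/IBM/HTTPServer/Plugins/config/webserver1/plugin-key.kdb -pw WebAS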
Reference:
http://www-01.ibm.com/support/docview.wss?rs=180&uid=swg21264477
http://www-01.ibm.com/support/docview.wss?uid=swg21198862
Issue 2: The installer reported that a copy of WebSphere Application Server V6.1 or V7.0 (or another related product) was present in the specified directory, even when the ODM VPD was clean.
While uninstalling the older version of WAS (5.x) to upgrade it to 6.x, the uninstall did not complete cleanly.
We tried to remove the registry entries with the smitty tool; after that we were still not able to install, as the installer said the path already contained an installed WAS.
So we contacted IBM WAS product support and raised a PMR, where we got some resolutions to clear the ODM.
Then they suggested trying:
manual_WebSphere_ODM_wipe.sh
manual_IHS_ODM_wipe.sh
After that we followed the suggested steps, succeeded, and finally upgraded to 6.x.
90. What is rollout update in WAS 6.1?
Rollout update automatically rolls out an application update in a clustered environment. It ensures no service interruption of the application: it stops, updates, and starts the application one cluster member at a time, while the other cluster members continue to run the application.