Failed to read auto-increment value from storage engine

One after another – this was a strange exception I received today. I am not sure whether this is a bug in JPA or in the underlying MySQL.

SQL Error: 1467, SQLState: HY000

Failed to read auto-increment value from storage engine

Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.orm.jpa.JpaSystemException: could not execute statement; nested exception is org.hibernate.exception.GenericJDBCException: could not execute statement] with root cause

java.sql.SQLException: Failed to read auto-increment value from storage engine

I was quite surprised. What else could it be?

Server version: 10.1.36-MariaDB

Spring Boot :: (v2.1.7.RELEASE)

I tried this.

ALTER TABLE `feed_item_info` AUTO_INCREMENT = 1;

It did not help.

Later –

ALTER TABLE `feed_item_info` ORDER BY `index`

It threw the following warning.

Warning: #1105 ORDER BY ignored as there is a user-defined clustered index in the table 'feed_item_info'

But despite the warning, this fixed it; inserts started working again.


I shot this recently, at the seashore which borders Singapore and Malaysia


java.lang.Exception: Incorrect string value: '\xE0\xAE\xB5\xE0\xAF\x87…'

Hi Hadoopers,

This is a nasty exception that killed my reducer task, which updates my MySQL table with the reducer output.

The reason behind this is a Unicode character.

The MySQL table was created with a non-Unicode western encoding, while I was trying to insert multilingual Unicode text. After changing the table collation (and, if needed, the field collation as well) to utf8_bin, it worked fine.

alter table FeedEntryRecord convert to character set utf8 collate utf8_bin;
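Besides the table collation, the JDBC connection itself has to transmit text as UTF-8, or the same error can reappear. A minimal sketch of building such a Connector/J URL follows; the host, database name and helper class are illustrative, not from the original code, while useUnicode and characterEncoding are standard Connector/J connection properties.

```java
// Sketch: build a MySQL Connector/J URL that transmits text as UTF-8.
// Host and database name are placeholders.
public class Utf8UrlExample {
    static String utf8Url(String host, String db) {
        return "jdbc:mysql://" + host + "/" + db
                + "?useUnicode=true&characterEncoding=UTF-8";
    }

    public static void main(String[] args) {
        System.out.println(utf8Url("localhost", "feeddb"));
        // prints jdbc:mysql://localhost/feeddb?useUnicode=true&characterEncoding=UTF-8
    }
}
```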


Lab 14: Sending MapReduce output to JDBC

Hi Hadoopers,

Unfortunately I couldn't post on time, as I've been hit by the flu. Here is the post for today. Let's see how to send the output of the Reducer to JDBC. I'll take the Lab 08 – MapReduce using custom class as Key post and modify it.


There is no change in the Mapper. It accepts the LongWritable and Text objects as input and emits the custom key EntryCategory with an IntWritable value.


The Reducer accepts the output of the Mapper as its input: EntryCategory as key and IntWritable as value. It emits DBOutputWritable as key and NullWritable as value.

package org.grassfield.hadoop;

import java.io.IOException;
import java.util.Date;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Reducer;
import org.grassfield.hadoop.entity.DBOutputWritable;
import org.grassfield.hadoop.entity.EntryCategory;

/**
 * Reducer for the Feed Category job
 * @author pandian
 */
public class FeedCategoryReducer extends
    Reducer<EntryCategory, IntWritable, DBOutputWritable, NullWritable> {

    @Override
    protected void reduce(EntryCategory key, Iterable<IntWritable> values, Context context) {
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        DBOutputWritable db = new DBOutputWritable();
        db.setParseDate(new java.sql.Date(new Date().getTime()));
        db.setCategory(key.toString());  // assumes EntryCategory prints its category name
        db.setCount(sum);
        try {
            context.write(db, NullWritable.get());
        } catch (IOException | InterruptedException e) {
            System.err.println("Error while updating record in database");
        }
    }
}


Our bean DBOutputWritable should implement the Writable and DBWritable interfaces so that we can write to the database.

package org.grassfield.hadoop.entity;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

/**
 * Bean for table feed_analytics
 * @author pandian
 */
public class DBOutputWritable implements Writable, DBWritable {
    private Date parseDate;
    private String category;
    private int count;

    public Date getParseDate() {
        return parseDate;
    }

    public void setParseDate(Date parseDate) {
        this.parseDate = parseDate;
    }

    public String getCategory() {
        return category;
    }

    public void setCategory(String category) {
        this.category = category;
    }

    public int getCount() {
        return count;
    }

    public void setCount(int count) {
        this.count = count;
    }

    @Override
    public void readFields(ResultSet rs) throws SQLException {
        throw new RuntimeException("not implemented");
    }

    @Override
    public void write(PreparedStatement ps) throws SQLException {
        ps.setDate(1, this.parseDate);
        ps.setString(2, this.category);
        ps.setInt(3, this.count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        throw new RuntimeException("not implemented");
    }

    @Override
    public void write(DataOutput out) throws IOException {
        throw new RuntimeException("not implemented");
    }
}

The Driver is where I specify my database details. Note the changes in the output key and value classes.

package org.grassfield.hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.grassfield.hadoop.entity.DBOutputWritable;
import org.grassfield.hadoop.entity.EntryCategory;

/**
 * A Driver program to count the categories in an RSS XML file. This may not
 * be the right approach to parse the XML; it is only for demo purposes.
 * @author pandian
 */
public class FeedCategoryCountDriver extends Configured
        implements Tool {

    @Override
    public int run(String[] args) throws ClassNotFoundException, IOException, InterruptedException {
        Configuration conf = getConf();
        GenericOptionsParser parser = new GenericOptionsParser(conf, args);
        args = parser.getRemainingArgs();

        Path input = new Path(args[0]);

        // Substitute your own JDBC URL, user and password here
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost:3306/feeddb", "user", "password");

        Job job = new Job(conf, "Feed Category Count");
        job.setJarByClass(FeedCategoryCountDriver.class);
        // Mapper from Lab 08; class name as in your project
        job.setMapperClass(FeedCategoryCountMapper.class);
        job.setMapOutputKeyClass(EntryCategory.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setReducerClass(FeedCategoryReducer.class);
        job.setOutputKeyClass(DBOutputWritable.class);
        job.setOutputValueClass(NullWritable.class);
        job.setOutputFormatClass(DBOutputFormat.class);

        try {
            FileInputFormat.setInputPaths(job, input);
            DBOutputFormat.setOutput(job,
                    "feed_category",                                 // table name
                    new String[]{"parseDate", "category", "count"}); // fields
        } catch (IOException e) {
            e.printStackTrace();
        }
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(),
                new FeedCategoryCountDriver(), args));
    }
}

Add the MySQL driver to your Maven dependencies. If you don't use Maven, add the library as an external jar dependency.
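For reference, the Maven dependency looks like this (the version shown is illustrative; pick the one matching your MySQL server):

```xml
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.38</version>
</dependency>
```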


Copy the jar to HadoopHome/lib/native and HadoopHome/share/hadoop/mapreduce/lib/.

Restart the Hadoop daemons.

Table Structure & DB setup

Let’s create our table first.


CREATE TABLE `feed_category` (
  `id` bigint(20) NOT NULL,
  `parseDate` date NOT NULL,
  `category` varchar(100) COLLATE utf8_bin NOT NULL,
  `count` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

ALTER TABLE `feed_category`
  ADD PRIMARY KEY (`id`);

ALTER TABLE `feed_category`
  MODIFY `id` bigint(20) NOT NULL AUTO_INCREMENT;

Let’s execute now.

hadoop@gandhari:/opt/hadoop-2.6.4/jars$ hadoop jar FeedCategoryCount-14.jar org.grassfield.hadoop.FeedCategoryCountDriver /user/hadoop/feed/2016-09-24

16/09/24 08:35:46 INFO mapreduce.Job: Counters: 40
        File System Counters
                FILE: Number of bytes read=128167
                FILE: Number of bytes written=1162256
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1948800
                HDFS: Number of bytes written=0
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=0
        Map-Reduce Framework
                Map input records=1107
                Map output records=623
                Map output bytes=19536
                Map output materialized bytes=4279
                Input split bytes=113
                Combine input records=623
                Combine output records=150
                Reduce input groups=150
                Reduce shuffle bytes=4279
                Reduce input records=150
                Reduce output records=150
                Spilled Records=300
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=0
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=1885339648
        Shuffle Errors
        File Input Format Counters
                Bytes Read=487200
        File Output Format Counters
                Bytes Written=0

So, is my table populated?


Yes, it is.


Have a good weekend, guys. Let me take some rest before moving on to MRUnit.

org.hibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs)

I'm inserting multiple records into MySQL with Hibernate 5.

After a constraint failure, all subsequent records failed to be inserted, with the error 'org.hibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs)'. To get rid of this problem, I cleared the Hibernate session with session.clear() when the exception occurred.

Data is getting pumped without any problem now.
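The recovery pattern can be sketched as follows. StubSession is a stand-in for org.hibernate.Session so the sketch stays self-contained; the real code calls the same save/flush/clear methods on the actual Hibernate session, and the record values here are made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class SessionClearExample {
    // Stand-in for org.hibernate.Session, so this sketch is runnable
    // without Hibernate on the classpath.
    static class StubSession {
        final List<String> pending = new ArrayList<>();
        final List<String> committed = new ArrayList<>();

        void save(String record) {
            // simulate a unique-constraint violation
            if (record.startsWith("dup"))
                throw new IllegalStateException("constraint failure");
            pending.add(record);
        }

        void flush() { committed.addAll(pending); pending.clear(); }

        void clear() { pending.clear(); }  // forget the failed entity
    }

    static List<String> insertAll(StubSession session, List<String> records) {
        for (String record : records) {
            try {
                session.save(record);
                session.flush();
            } catch (IllegalStateException e) {
                session.clear();  // the fix: clear the session after the failure
            }
        }
        return session.committed;
    }

    public static void main(String[] args) {
        StubSession s = new StubSession();
        System.out.println(insertAll(s, List.of("a", "dup1", "b")));
        // prints [a, b] – the record after the failure still gets in
    }
}
```

Without the clear() call, the failed entity stays attached to the session and every later flush re-triggers the AssertionFailure.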


Spring 4 MVC and JDBC authenticated Spring Security – 100% java annotation based configuration

It is my pleasure to introduce another post in this Spring 4 development environment series. So far we have done the following steps.

This post is an extension of my earlier post Spring 4 MVC and Spring Security 100% java annotation based configuration. There we used in-memory authentication, which is suitable for a beginner. But my next project already has a database designed, and I need to authenticate against it. So here is JDBC-based authentication. Please add MySQL and Commons DBCP to your Maven dependencies.

Modify WebSecurityConfig

The in-memory configuration was defined in our earlier example. Let's modify it slightly to handle authentication with JDBC. Please look at the method public void configAuthentication(AuthenticationManagerBuilder auth).

package org.grassfield.conf;

import org.apache.commons.dbcp2.BasicDataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests().antMatchers("/", "/home").permitAll()
                .anyRequest().authenticated().and().formLogin();
    }

    @Autowired
    public void configAuthentication(AuthenticationManagerBuilder auth)
            throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");  // substitute your own
        ds.setUrl("jdbc:mysql://localhost:3306/mydb");   // driver, URL and
        ds.setUsername("user");                          // credentials here
        ds.setPassword("password");
        auth.jdbcAuthentication().dataSource(ds)
                // join condition and role column assumed from the schema
                .usersByUsernameQuery(
                        "select u.user_name, u.password, true from user_info u where u.user_name=?")
                .authoritiesByUsernameQuery(
                        "select u.user_name, r.role from user_info u, user_role r where u.id=r.user_id and u.user_name=?");
    }
}

Here are the screenshots.


Way to Struts 1.2 DataSource


Today I downloaded Struts 1.2.7 and tried to write an application. (I know I am picking a rather old version.) I configured the datasource in Struts. Alas, when I tried to start my server, Tomcat 6.0, I got an error saying "java.lang.ClassNotFoundException: org.apache.commons.dbcp.BasicDataSource". With some Googling I found that struts-legacy.jar needed to be downloaded and copied to the lib folder of Tomcat. (I couldn't get a recent version of struts-legacy; I found something in an archive.) I happily restarted the server. Then I saw java.lang.NoClassDefFoundError: org/apache/commons/pool/impl/GenericObjectPool. Oof, again I needed to download commons-dbcp-1.2.2.jar and commons-pool.jar.

My search came to an end 🙂 It is working fine now.

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure


Today I did a fresh install of Mandriva 2008 Linux. I wrote some Java code to test MySQL connectivity. It ended with the following exception.

com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
Last packet sent to the server was 0 ms ago.

A deeper Google search gave me a solution concerning a MySQL connectivity parameter in /etc/my.cnf.


This parameter had been added for some security-related reason. I really don't know what it does. I just removed that line, which solved the issue.

How to access MS Access database from JDBC?

By the way, I didn't test it; I am just posting it for my reference. Thanks, Angsuman Chakraborty.


import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Wrapper class added so the snippet compiles as-is
public class AccessDBUtil {
    private static final String accessDBURLPrefix =
            "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=";
    private static final String accessDBURLSuffix = ";";

    // Initialize the JdbcOdbc Bridge Driver
    static {
        try {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        } catch (ClassNotFoundException e) {
            System.err.println("JdbcOdbc Bridge Driver not found!");
        }
    }

    /** Creates a Connection to an Access database */
    public static Connection getAccessDBConnection(String filename) throws SQLException {
        filename = filename.replace('\\', '/').trim();
        String databaseURL = accessDBURLPrefix + filename + accessDBURLSuffix;
        // System.err.println("Database URL: " + databaseURL);
        return DriverManager.getConnection(databaseURL, "", "");
    }
}