Wednesday 30 August 2017

2)Sqoop Tools

Sqoop Tools:
Sqoop is a collection of related tools. To use Sqoop, we specify the tool we want to use and the arguments that control that tool.

Syntax:
sqoop tool-name [tool-arguments]
To display a list of all available sqoop tools:
Command:
sqoop help

Note:
Helpfully, we can display the help for a specific tool by entering:
sqoop help (tool-name)  or  sqoop (tool-name) --help

Example:
sqoop help import or sqoop import --help.


To display the usage of a specific Sqoop tool:

mano@Mano:~$ sqoop import --help

usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]

So, the tool arguments are divided into two categories:
  1. [GENERIC-ARGS]
  2. [TOOL-ARGS]

1.[GENERIC-ARGS] ==> These control the Hadoop configuration and server settings.

Note:
Generic arguments come after the tool name but before any tool-specific arguments (such as --connect, --username, etc.).
Generic arguments are preceded by a single dash character (-).

2.[TOOL-ARGS] ==> These are specific to the Sqoop tool being used.

Note:
Tool-specific arguments start with two dashes (--), unless they are single-character arguments such as -P.
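For instance, a minimal sketch of an import that places a generic -D argument before the tool-specific arguments (the -D property and -m 1 are illustrative; the sqoop_test database and employees table are the ones used later in this blog):

sqoop import \
 -D mapreduce.job.name=sqoop_import_demo \
 --connect jdbc:mysql://localhost/sqoop_test \
 --username root \
 -P \
 --table employees \
 -m 1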


Using Options Files to Pass Arguments:

Step 1: Create a file in the local file system that contains the argument options to pass to Sqoop.

Step 2:
Pass the options file using the --options-file argument.

Syntax:
sqoop --options-file file_path;

Example:
Here, the eval tool is used to inspect the database through an options file.

options_file.txt
#
# Options file for Sqoop import
#

eval

--connect
jdbc:mysql://localhost/sqoop_test

--username
root

--password
root

Execution:
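A sketch of the command, assuming the options file above was saved as /home/mano/options_file.txt (the path is illustrative):

mano@Mano:~$ sqoop --options-file /home/mano/options_file.txt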
Output:


Please follow the link for further ==>Sqoop_Page 3

Sunday 27 August 2017

About Big Data

Before we start exploring Big Data, let's think about how big data came into the picture.

The following are the reasons big data came into the picture:
  1. Evolution of technology
  2. IOT(Internet Of Things)
  3. Social Media
  4. Other factors
Let's see briefly:
 
1)Evolution of technology:


Earlier we had landline phones, but nowadays we have Android and iOS smartphones that make our lives smarter. Just think: every operation we perform on a smartphone generates data that resides somewhere.

Desktops used to be the way we handled operations, i.e., stored and processed data, using storage devices like floppies, discs, tapes, etc.

These days, hard disks and cloud storage play a vital role.

Earlier we were in the era of analog storage, but today storage is almost entirely digital. The same evolution is happening with cars, for example self-driving cars.




2)IOT(Internet Of Things):

IoT connects physical devices to the Internet and makes those devices smarter.

Example:
Smart TVs, smart ACs, smart cars, etc.



3)Social Media:
Data generated on social media sites:
  • Facebook likes,videos,photos,tags,comments etc.,
  • Twitter tweets,
  • Youtube video uploads
  • Instagram pics,
  • Emails

4)Other Factors:

  • Retail
  • Banking & Finance,
  • Media & Entertainment
  • Health care,
  • Education areas,
  • Government,
  • Transportation, Insurance etc.,

Note: It is estimated that by 2020 there will be around 50 billion IoT devices in the world.

Big Data:

Big data is a term for data sets, both structured and unstructured, that are so large or complex that traditional data processing software is inadequate to deal with them or finds them difficult to process.

Note: Big Data is not a technology; it's a paradigm (pattern) shift.


To determine which data is considered Big Data, we have some
characteristics of big data (the 5 V's of Big Data):


1)V- Volume:
The amount of data being generated.



2)V- Variety:
The different kinds of data being generated from various sources.



Types of data:
  1. Structured data - Tables
  2. Semi-structured data - CSV,JSON,EMAILS,TSV,XML
  3. Unstructured data - Videos, images, Logs, Audio files
3)V- Velocity:
The speed at which data is being generated and processed to meet demands.
Data is being generated at an alarming rate.


4)V- Value:
The mechanism for bringing the correct meaning out of huge volumes of data.

5)V- Veracity:
Uncertainty and inconsistencies in the data, i.e., The quality of captured data can vary greatly, affecting accurate analysis.


Problems with Big Data:
Problem 1:Storing exponentially growing large data sets in a non-distributed system.

Problem 2: Processing a variety of data, i.e., data with complex structures.

Problem 3:Processing data faster

To solve the above problems, Hadoop comes into the picture and plays a vital role.

Solutions with Hadoop:

Problem 1:Storing exponentially growing large data sets in a non-distributed system.

Solution: HDFS
  • It is the storage part of Hadoop
  • Distributed File system,
  • Divides files into smaller chunks and stores across the cluster.
  • Scalable as per requirement(Scalability)

Problem 2: Storing a variety of data.

Solution: HDFS
  • HDFS allows storing any kind of data (structured, semi-structured, or unstructured)
  • No schema validation in HDFS while dumping data
  • Follows WORM (Write Once, Read Many)


Problem 3:
Processing data faster

Solution: MapReduce
  • Parallel processing of the data present in HDFS
  • Processes data locally, i.e., each node is responsible for processing the data stored on it.
Big data is also an opportunity, as the use cases below show.



Big data use cases:
Below are some of the Big data use cases from different domains:
  •  Improve Customer Experience
  •  Sentiment analysis
  •  Customer Churn analysis
  •  Predictive analysis
  •  Real-time ad matching and serving

Friday 25 August 2017

1)SQOOP, LIST and EVAL

Apache Sqoop:
Apache Sqoop(TM) is a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases.

To run and check sqoop options:


Start by listing first:



1) Command to list databases on a MySQL server
Command:
list-databases     List available databases on a server
Syntax:
sqoop list-databases --connect <jdbc-url> --username <user> --password <password> ;

mano@Mano:~$ sqoop list-databases \
> --connect jdbc:mysql://localhost \
> --username root \
> --password root \
> ;

Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
17/08/25 15:02:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/08/25 15:02:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/08/25 15:02:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
metastore
mysql
performance_schema
sqoop_test
sqoopdb

2) Command to list tables in a MySQL database
Command:
list-tables        List available tables in a database
Syntax:
sqoop list-tables --connect <jdbc-url> --username <user> --password <password> ;


mano@Mano:~$ sqoop list-tables --connect jdbc:mysql://localhost/sqoop_test --username root --password root ;
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
17/08/25 15:02:56 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/08/25 15:02:56 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/08/25 15:02:56 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
employees
mano@Mano:~$

Note:
Difference between list-databases and list-tables:
For list-databases, we don't need to pass the DB name in the connection string.
Example:
--connect jdbc:mysql://localhost
For list-tables, we must pass the appropriate DB name in the connection string.
Example:
--connect jdbc:mysql://localhost/sqoop_test

3) Command to evaluate SQL statements against a MySQL database

Command:
eval               Evaluate a SQL statement and display the results

Syntax:
sqoop eval --connect <jdbc-url> --username <user> --password <password> --query '<query to run>' ;

mano@Mano:~$ sqoop eval \
> --connect jdbc:mysql://localhost/sqoop_test \
> --username root \
> --password root \
> --query 'select * from employees'

Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/mano/Hadoop_setup/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
;
17/08/25 15:23:54 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/08/25 15:23:54 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/08/25 15:23:54 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
-----------------------------------------
| id          | name       | city       |
-----------------------------------------
| 1           | Mano       | Chennai    |
| 2           | Prasath    | Chennai    |
| 3           | Chella     | osure      |
-----------------------------------------
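eval is not limited to SELECT statements; it can also run DML such as INSERT. A minimal sketch against the same employees table (the inserted values are illustrative):

mano@Mano:~$ sqoop eval \
> --connect jdbc:mysql://localhost/sqoop_test \
> --username root \
> -P \
> --query "INSERT INTO employees VALUES (4, 'Kumar', 'Chennai')"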

Monday 14 August 2017

New_5)HiveQL Keywords ==>Non-reserved Keywords and Reserved Keywords

HiveQL Keywords, Non-reserved Keywords and Reserved Keywords



All Keywords (by version):

Hive 1.2.0
Non-reserved Keywords: ADD, ADMIN, AFTER, ANALYZE, ARCHIVE, ASC, BEFORE, BUCKET, BUCKETS, CASCADE, CHANGE, CLUSTER, CLUSTERED, CLUSTERSTATUS, COLLECTION, COLUMNS, COMMENT, COMPACT, COMPACTIONS, COMPUTE, CONCATENATE, CONTINUE, DATA, DATABASES, DATETIME, DAY, DBPROPERTIES, DEFERRED, DEFINED, DELIMITED, DEPENDENCY, DESC, DIRECTORIES, DIRECTORY, DISABLE, DISTRIBUTE, ELEM_TYPE, ENABLE, ESCAPED, EXCLUSIVE, EXPLAIN, EXPORT, FIELDS, FILE, FILEFORMAT, FIRST, FORMAT, FORMATTED, FUNCTIONS, HOLD_DDLTIME, HOUR, IDXPROPERTIES, IGNORE, INDEX, INDEXES, INPATH, INPUTDRIVER, INPUTFORMAT, ITEMS, JAR, KEYS, KEY_TYPE, LIMIT, LINES, LOAD, LOCATION, LOCK, LOCKS, LOGICAL, LONG, MAPJOIN, MATERIALIZED, METADATA, MINUS, MINUTE, MONTH, MSCK, NOSCAN, NO_DROP, OFFLINE, OPTION, OUTPUTDRIVER, OUTPUTFORMAT, OVERWRITE, OWNER, PARTITIONED, PARTITIONS, PLUS, PRETTY, PRINCIPALS, PROTECTION, PURGE, READ, READONLY, REBUILD, RECORDREADER, RECORDWRITER, REGEXP, RELOAD, RENAME, REPAIR, REPLACE, REPLICATION, RESTRICT, REWRITE, RLIKE, ROLE, ROLES, SCHEMA, SCHEMAS, SECOND, SEMI, SERDE, SERDEPROPERTIES, SERVER, SETS, SHARED, SHOW, SHOW_DATABASE, SKEWED, SORT, SORTED, SSL, STATISTICS, STORED, STREAMTABLE, STRING, STRUCT, TABLES, TBLPROPERTIES, TEMPORARY, TERMINATED, TINYINT, TOUCH, TRANSACTIONS, UNARCHIVE, UNDO, UNIONTYPE, UNLOCK, UNSET, UNSIGNED, URI, USE, UTC, UTCTIMESTAMP, VALUE_TYPE, VIEW, WHILE, YEAR
Reserved Keywords: ALL, ALTER, AND, ARRAY, AS, AUTHORIZATION, BETWEEN, BIGINT, BINARY, BOOLEAN, BOTH, BY, CASE, CAST, CHAR, COLUMN, CONF, CREATE, CROSS, CUBE, CURRENT, CURRENT_DATE, CURRENT_TIMESTAMP, CURSOR, DATABASE, DATE, DECIMAL, DELETE, DESCRIBE, DISTINCT, DOUBLE, DROP, ELSE, END, EXCHANGE, EXISTS, EXTENDED, EXTERNAL, FALSE, FETCH, FLOAT, FOLLOWING, FOR, FROM, FULL, FUNCTION, GRANT, GROUP, GROUPING, HAVING, IF, IMPORT, IN, INNER, INSERT, INT, INTERSECT, INTERVAL, INTO, IS, JOIN, LATERAL, LEFT, LESS, LIKE, LOCAL, MACRO, MAP, MORE, NONE, NOT, NULL, OF, ON, OR, ORDER, OUT, OUTER, OVER, PARTIALSCAN, PARTITION, PERCENT, PRECEDING, PRESERVE, PROCEDURE, RANGE, READS, REDUCE, REVOKE, RIGHT, ROLLUP, ROW, ROWS, SELECT, SET, SMALLINT, TABLE, TABLESAMPLE, THEN, TIMESTAMP, TO, TRANSFORM, TRIGGER, TRUE, TRUNCATE, UNBOUNDED, UNION, UNIQUEJOIN, UPDATE, USER, USING, UTC_TMESTAMP, VALUES, VARCHAR, WHEN, WHERE, WINDOW, WITH

Hive 2.0.0
Non-reserved Keywords: removed REGEXP, RLIKE; added AUTOCOMMIT, ISOLATION, LEVEL, OFFSET, SNAPSHOT, TRANSACTION, WORK, WRITE
Reserved Keywords: added COMMIT, ONLY, REGEXP, RLIKE, ROLLBACK, START

Hive 2.1.0
Non-reserved Keywords: added ABORT, KEY, LAST, NORELY, NOVALIDATE, NULLS, RELY, VALIDATE
Reserved Keywords: added CACHE, CONSTRAINT, FOREIGN, PRIMARY, REFERENCES

Hive 2.2.0
Non-reserved Keywords: added DETAIL, DOW, EXPRESSION, OPERATOR, QUARTER, SUMMARY, VECTORIZATION, WEEK, YEARS, MONTHS, WEEKS, DAYS, HOURS, MINUTES, SECONDS
Reserved Keywords: added DAYOFWEEK, EXTRACT, FLOOR, INTEGER, PRECISION, VIEWS

Hive 3.0.0
Non-reserved Keywords: added TIMESTAMPTZ, ZONE
Reserved Keywords: added TIME, NUMERIC


If the user still would like to use reserved keywords as identifiers, there are two ways:

1) use quoted identifiers, 
2) set hive.support.sql11.reserved.keywords=false.
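For example, a quick sketch in the Hive shell (the table and column names below are illustrative):

hive> select `date`, `user` from access_log;
hive> set hive.support.sql11.reserved.keywords=false;
hive> select date, user from access_log;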

New_4)Hive Interactive Shell Commands:

4)Hive Interactive Shell Commands:

When $HIVE_HOME/bin/hive is run without either the -e or -f option, it enters interactive shell mode.

Note:Use ";" (semicolon) to terminate commands

Function: Hive command
  • Run script inside shell: source file_name
  • Run ls (dfs) commands: dfs -ls /user
  • Run ls (bash command) from shell: !ls
  • Set configuration variables: set mapred.reduce.tasks=32
  • TAB auto completion: set hive.<TAB>
  • Show all variables starting with hive: set
  • Revert all variables: reset
  • Add jar to distributed cache: add jar jar_path
  • Show all jars in distributed cache: list jars
  • Delete jar from distributed cache: delete jar jar_name


Examples:

hive> set mapred.reduce.tasks=32;
hive> set;
hive> select student1.* from student1;
hive> !ls;
hive> dfs -ls;

TAB auto completion :

hive (default)> set hive.cli.print.
hive.cli.print.current.db   hive.cli.print.header


New_3)Hive Logging information

3)Hive Logging information:

  • Hive uses log4j for logging.
  • These logs are not emitted to the standard output by default, but are instead captured in a log file specified by Hive's log4j properties file.
  • By default, Hive uses hive-log4j.default in the conf/ directory of the Hive installation, which writes logs to /tmp/<userid>/hive.log and uses the WARN level.
  • It is often desirable to emit the logs to the standard output and/or change the logging level for debugging purposes.

These can be done from the command line as follows:

$HIVE_HOME/bin/hive --hiveconf hive.root.logger=INFO,console

Log Directory:

Property hive.log.dir=<directory_path>

Logging options:

Property hive.root.logger=INFO,console
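For example, a sketch that combines both properties on the command line (the log directory path below is illustrative):

$HIVE_HOME/bin/hive --hiveconf hive.root.logger=DEBUG,console --hiveconf hive.log.dir=/tmp/mano/hive_logs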

3)Hive Commands, CLI (Command Line Interface)

Hive commands:

Commands are useful for setting a property or adding a resource
Commands Table chart:

Command: Description
  • quit or exit : Use quit or exit to leave the interactive shell.
  • reset : Resets the configuration to the default values.
  • set <key>=<value> : Sets the value of a particular configuration variable (key).
  • set : Prints a list of configuration variables.
  • set -v : Prints all Hadoop and Hive configuration variables.
  • add FILE[S]/JAR[S]/ARCHIVE[S] <filepath> <filepath>* : Adds one or more files, jars, or archives to the list of resources in the distributed cache.
  • add FILE[S]/JAR[S]/ARCHIVE[S] <ivyurl> <ivyurl>* : Adds one or more files, jars, or archives to the list of resources in the distributed cache using an Ivy URL of the form ivy://group:module:version?query_string.
  • list FILE[S]/JAR[S]/ARCHIVE[S] : Lists the resources already added to the distributed cache.
  • list FILE[S]/JAR[S]/ARCHIVE[S] <filepath>* : Checks whether the given resources are already added to the distributed cache or not.
  • delete FILE[S]/JAR[S]/ARCHIVE[S] <filepath>* : Removes the resource(s) from the distributed cache.
  • delete FILE[S]/JAR[S]/ARCHIVE[S] <ivyurl> <ivyurl>* : Removes the resource(s) which were added using the <ivyurl> from the distributed cache.
  • ! <command> : Executes a shell command from the Hive shell.
  • dfs <dfs command> : Executes a dfs command from the Hive shell.
  • <query string> : Executes a Hive query and prints results to standard output.
  • source FILE <filepath> : Executes a script file inside the CLI.
  • compile `<groovy string>` AS GROOVY NAMED <name> : Allows inline Groovy code to be compiled and used as a UDF.

Hive CLI(Command Line Interface)

$HIVE_HOME/bin/hive is a shell utility which can be used to run Hive queries in either interactive or batch mode.

Hive Command Line Options:
Syntax/usage: run hive -H (or hive --help) on the command line to see the available options.


Examples:
1)Running a query from the command line with -e <quoted-query-string>  

mano@Mano:~$ hive -e 'select * from students'
 
2)Setting Hive configuration variables --hiveconf

mano@Mano:~$ hive -e 'select * from students' --hiveconf hive.cli.print.header=false


Note: The variant "-hiveconf" is supported as well as "--hiveconf".

3)Dumping data out from a query into a file 

mano@Mano:~$ hive -e 'select * from students'> /home/mano/DataSets/students.txt



Using silent mode (-S, --silent) suppresses the informational log messages so that only the query results are emitted.
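For example, a sketch combining -S with the earlier query, so only the rows land in the file (same paths as above):

mano@Mano:~$ hive -S -e 'select * from students' > /home/mano/DataSets/students.txt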


4)Running a script from local disk using -f <filepath> 

hive -f <filepath>
<filepath> can be from one of the Hadoop supported filesystems (HDFS, S3, etc.) as well

hive -f /home/my/hive-script.sql

hadoop@Manohar:~$ hive -f hdfs://localhost:9000/Hive_Script/hdfs_script.sql

5)Running an initialization script before entering interactive mode

hive -i /home/my/hive-init.sql


1)Apache Hive:

Apache Hive:

Apache Hive is data warehouse software built on top of Apache Hadoop for data analysis. It facilitates reading, writing, and managing large data sets residing in distributed storage (HDFS).

Note:

Hive provides a mechanism to impose structure for a variety of data formats on Hadoop and to query that data using a SQL-like language called HiveQL (HQL).


Hive originated at Facebook.

Apache Hive provides the following features:
  • Tools to enable easy access to data via a SQL interface, thus enabling data warehousing tasks such as extract/transform/load (ETL), reporting, and data analysis.
  • A mechanism to impose structure on a variety of data formats
  • Access to files stored either directly in Apache HDFS™ or in other data storage systems such as Apache HBase™
  • Query execution via Apache Tez™, Apache Spark™, or MapReduce (default)
Limitations of Hive:

Hive is not designed for Online Transaction Processing (OLTP); it is only used for Online Analytical Processing (OLAP).

Hive supports overwriting data, but not updates and deletes.

Why is Hive used instead of Pig?
  • Hive-QL is a declarative language like SQL; Pig Latin is a data flow language.
  • Pig: a data-flow language and environment for exploring very large datasets.
  • Hive: a distributed data warehouse.
Components of Hive:

1)HCatalog:

HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools — Pig, MapReduce — to more easily read and write data on the grid.

2)WebHCat:

WebHCat provides a service that you can use to run Hadoop MapReduce (or YARN), Pig, Hive jobs or perform Hive metadata operations using an HTTP (REST style) interface.
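As a quick sketch, assuming WebHCat is running on its default port 50111 on the local machine, its status endpoint can be checked with:

mano@Mano:~$ curl -s 'http://localhost:50111/templeton/v1/status'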

Hive Execution engines and properties:


There are currently three execution engines:

 1. MapReduce engine (default):
  set hive.execution.engine=mr;
 2. Tez engine:
  set hive.execution.engine=tez;
 3. Spark engine:
  set hive.execution.engine=spark;
Please click next to proceed further ==> Next page

Wednesday 9 August 2017

Java Annotations

Java Annotations:

Description:
A Java annotation is a tag that represents metadata, i.e., additional information attached to classes, interfaces, methods, or fields that can be used by the Java compiler and the JVM.

Annotations in Java are used to provide additional information, so they are an alternative to XML configuration and Java marker interfaces.

Types of Annotations:

There are two types,

  1. Built-in Java annotations
  2. Custom annotations


1)Built-In Java Annotations:
There are several built-in annotations in java. Some annotations are below,

Built-In Java Annotations used in java code
  • @Override
  • @SuppressWarnings
  • @Deprecated


Let's start with the built-in annotations first

@Override

The @Override annotation assures that the subclass method is overriding a parent class method. If it is not, a compile-time error occurs.

Note: Sometimes we make silly mistakes such as spelling errors in a method name. So it is better to mark the method with @Override, which guarantees that the method is actually overridden; otherwise the method we define acts as a new method, not an overriding one.

With the annotation: the compiler verifies that the method really overrides a parent method.

Without the annotation: the method silently acts as a new method.


@SuppressWarnings

The @SuppressWarnings annotation is used to suppress warnings issued by the compiler.

Sample code:
package SampleExcercises;

import java.util.*;

class Annotation2 {
    @SuppressWarnings("unchecked")
    public static void main(String args[]) {
        ArrayList list = new ArrayList();
        list.add("Mano");
        list.add("Chella");
        list.add("Prasath");

        for (Object obj : list) {
            System.out.println(obj);
        }
    }
}


Note: Since we are using a non-generic (raw) collection, the compiler issues an "unchecked" warning. If you remove the @SuppressWarnings("unchecked") annotation, the warning will show at compile time.


@Deprecated
The @Deprecated annotation marks a method as deprecated, so the compiler prints a warning when it is used. It informs the user that the method may be removed in future versions, so it is better not to use such methods.

Sample code:
package SampleExcercises;

class A {
    void m() { System.out.println("hello m"); }

    @Deprecated
    void n() { System.out.println("hello n"); }
}

class Annotation3 {
    public static void main(String args[]) {
        A a = new A();
        a.n();
    }
}

2)Custom Annotations:
Java Custom annotations or Java User-defined annotations are easy to create and use.

NOTE: The @interface element is used to declare an annotation.

Example:
@interface MyAnnotation{}

MyAnnotation is the custom annotation name.

Points to remember for java custom annotation signature:


  • Method should not have any throws clauses
  • Method should return one of the following: primitive data types, String, Class, enum or array of these data types.
  • Method should not have any parameter.
  • We should attach @ just before interface keyword to define annotation.
  • It may assign a default value to the method.

Types of Custom Annotations:

There are three types of Custom Annotations
  1. Marker Annotation
  2. Single-Value Annotation
  3. Multi-Value Annotation

1)Marker Annotation

An annotation that has no method is called a marker annotation.

Example:
@interface MyAnnotation{} 
 
The @Override and @Deprecated are marker annotations.

2) Single-Value Annotation

An annotation that has one method is called a single-value annotation.

Example:

@interface MyAnnotation{  
int value();  
}
  
We can provide the default value to method as well.

Example:

@interface MyAnnotation{
int value() default 0;
}

How to apply Single-Value Annotation:

Let's see the code to apply the single value annotation.

@MyAnnotation(value=10)  
Note: The value can be anything.

3) Multi-Value Annotation:

An annotation that has more than one method is called a multi-value annotation.

Example:

@interface MyAnnotation{  
int value1();  
String value2();  
String value3();  
}  
 
We can provide the default value to method as well.

Example:

@interface MyAnnotation{
int value1() default 1;
String value2() default "";
String value3() default "abs";
}
 
How to apply Multi-Value Annotation:

Let's see the code to apply the multi-value annotation.

@MyAnnotation(value1=1,value2="Mano",value3="Chennai")  


Built-in Annotations used in custom annotations
  • @Target
  • @Retention
  • @Inherited
  • @Documented

@Target

The @Target annotation is used to specify the element types on which an annotation can be applied.

The java.lang.annotation.ElementType enum declares many constants to specify the type of element where annotation is to be applied such as TYPE, METHOD, FIELD etc.,

Element Type: Where the annotation can be applied
  • TYPE: class, interface or enumeration
  • FIELD: fields
  • METHOD: methods
  • CONSTRUCTOR: constructors
  • LOCAL_VARIABLE: local variables
  • ANNOTATION_TYPE: annotation type
  • PARAMETER: parameter


Example to specify an annotation for a class

@Target(ElementType.TYPE)  
@interface MyAnnotation
{  
int value1();  
String value2();  
}  

Example to specify annotation for a class, methods or fields

@Target({ElementType.TYPE, ElementType.FIELD, ElementType.METHOD})  
@interface MyAnnotation
{  
int value1();  
String value2();  
}  

@Retention

The @Retention annotation is used to specify to what level the annotation will be available (retained).

RetentionPolicy: Availability
  • RetentionPolicy.SOURCE: refers to the source code; discarded during compilation, so it will not be available in the compiled class.
  • RetentionPolicy.CLASS: refers to the .class file; available to the Java compiler but not to the JVM. It is included in the class file.
  • RetentionPolicy.RUNTIME: refers to the runtime; available to the Java compiler and the JVM.

Example to specify the RetentionPolicy

@Retention(RetentionPolicy.RUNTIME)  
@Target(ElementType.TYPE)  
@interface MyAnnotation{  
int value1();  
String value2();  
}  

@Inherited

By default, annotations are not inherited by subclasses. The @Inherited annotation marks an
annotation to be inherited by subclasses.

Example:

@Inherited  
@interface MyAnnotation { }//Now it will be available to subclass also 

@Documented

@Documented marks the annotation for inclusion in the generated documentation (Javadoc).
