identify duplicates in a file
10-29-2003 02:09 PM
I have a file which contains around 500 numbers, and some of them are duplicates. Is there any way to find out which numbers have duplicate entries in the file?
Thanks,
Anand.
10-29-2003 02:24 PM
Re: identify duplicates in a file
cat your_file | uniq -d
This will give you the entries that are repeated (assuming the file is a plain ASCII file).
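Note that uniq only compares adjacent lines, so it can miss duplicates that are not grouped together. A minimal variant that sorts first (a sketch, with your_file standing in for the real filename):
sort your_file | uniq -d
This prints each duplicated value once, no matter where its copies appear in the file.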
10-29-2003 05:22 PM
Re: identify duplicates in a file
Ayup uniq will do the trick.
Now if you want to do something more than just print the numbers, then you might go with perl:
perl -e 'while (<>){ if (defined($x{$_})) { print } else { $x{$_}=1 }}' < yourfile
Replace the 'print' with something weird or wonderful at your whim.
Hein.
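An awk one-liner can express the same seen-before idea; this is a sketch (yourfile is a placeholder) that prints a line each time it has already been seen earlier in the file:
awk 'seen[$0]++' yourfile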
10-29-2003 07:17 PM
Re: identify duplicates in a file
Note that uniq only detects adjacent duplicate lines; i.e., it will not find the repeated "line 3" in
line 1
line 2
line 3
line 4
line 3
I would use sort and sort -u to create 2 files, and diff to compare them.
sort file > f1
sort -u file > f2
diff f1 f2
This will show all duplicate lines, prefixed with "<".
If you want to take out the noise, use
diff f1 f2|grep "^<"|cut -c 3-
-- Graham
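With the same two sorted files, comm should give the duplicates directly, without the diff/grep/cut step (a sketch reusing the f1 and f2 files created above):
comm -23 f1 f2
Here -2 suppresses lines unique to f2 and -3 suppresses lines common to both, leaving only the extra copies of each duplicated line.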