duplicates in a file
05-14-2006 10:43 PM
I am trying to write a small script but am struggling with what to use.
I have a file with the user id followed by some login info, but this block is duplicated in some instances, i.e.:
time_last_login =
tty_last_login =
host_last_login =
unsuccessful_login_count =
I need to search the file for the user id, keep the first instance, and delete the rest, but I do not know how to search for the userid and then delete 6 lines from where the uid is.
Any ideas will be a great help.
Thanks
05-15-2006 12:05 AM
Re: duplicates in a file
Attached is a little script which may be a step towards your solution.
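The attachment has not survived in the thread (the follow-up was tagged "missing attachment"), so as a stand-in, here is a minimal sketch of one possible approach. It assumes the userid line is the only non-empty line in a record without an "=" sign, and that each record's attribute lines follow it directly; users.log is a placeholder filename.

awk '
NF && !/=/ { dup = seen[$1]++ }   # userid line: flag it if already seen
!dup                              # print only lines of first-seen records
' users.log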
05-15-2006 12:38 AM
Re: duplicates in a file
Can you post the script that you are working on?
Otherwise, use this command, which will give you the input you are looking for, one entry per line:
testos1:/usr/sbin/acct # ./fwtmp < /var/adm/wtmp
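For reference, a typical fwtmp round trip looks like this (run from /usr/sbin/acct as above; wtmp.ascii and wtmp.new are placeholder filenames):

./fwtmp < /var/adm/wtmp > wtmp.ascii    # binary wtmp records -> readable ASCII, one per line
./fwtmp -ic < wtmp.ascii > wtmp.new     # edited ASCII records -> binary wtmp format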
Chan
05-15-2006 01:06 AM
Re: duplicates in a file
I don't know if the userid you mention is literally found as "userid" in the file, so adapt the value below as needed.
If you mean to keep the rest of the file without duplicate entries and a record consists of skip=6 lines:
awk -v us="userid" -v skip=6 '$1 == us {if(have_us) for(i=0;i
}
{print}' myfile
To get only the first record of "userid" and nothing else:
awk -v us="userid" -v skip=6 '$1 == us {for(i=0;i
Best regards, Peter
05-15-2006 02:34 AM
Re: duplicates in a file
Gordon and Peter, your examples are cool.
Thanks