Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX

Re: AWK script for more than 200 fields
04-24-2009 10:46 AM
I had one of those "Well...duh!" moments...
I didn't particularly *look* at the input, only at the fact that he used $0 and a substring, and not $200, $250 or $NF. Oh well...
Just out of curiosity, I ran JRF's stuff through GNU awk, which *didn't* have a problem with the file at all and produced correct results.
Then I had it '{print NF}' the sample data posted, with the default separators; the max was 46. So the sample doesn't appear to be "representative".
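A quick sketch of that kind of max-NF check, using inline sample data rather than the actual posted file (which I don't have here):

```shell
# Track the largest NF seen across all records, print it at the end.
printf 'a b c\nd e\nf g h i\n' |
awk '{ if (NF > max) max = NF } END { print max }'    # prints 4 for this input
```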
Of course, given that he *doesn't* reference anything but $0, the "-F" switch should certainly get him going, one would think (at least until the length of a record becomes an issue).
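A sketch of that "-F" workaround, assuming GNU awk and a generated 250-field line (the real data and filenames in the thread differ):

```shell
# Build one line with 250 blank-separated fields -- more than the
# old HP-UX awk field limit of roughly 200.
line=$(printf 'f%d ' $(seq 1 250))

# Default separators: awk splits this into 250 fields.
printf '%s\n' "$line" | awk '{ print NF }'                           # prints 250

# FS set to a character absent from the data: the record stays whole in $1,
# no field limit is hit, and substr($0, ...) still works as before.
printf '%s\n' "$line" | awk -F'|' '{ print NF; print substr($0, 1, 2) }'
# prints 1, then f1
```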
04-24-2009 10:59 AM
Duhr. Duhr duhr duhr duhr. Duhr.
HP-Server-Literate since 1979
04-25-2009 01:27 AM
How can I upgrade the system to use the gawk option?
The server specification:
HP-UX dhp0037 B.10.20 A 9000/851 2013207678 two-user license
04-25-2009 03:14 AM
gawk == GNU awk, which you said wasn't supported and now, not installed. (Did you look in /usr/local/bin/*awk?)
There is no need to use gawk if you just use -F"dummy separator".
04-25-2009 08:59 AM
...which means that either "gawk" isn't installed, or that it can't be found in $PATH. But as noted previously, on several occasions, all you need to do is use the "-F" switch and set the field separator to something not used in the data (perhaps "|");
that should suffice to eliminate the "too many fields" error.
04-26-2009 11:49 AM
While you can use any character or regular expression for your inter-field delimiter, I chose the _nul_ character as one that seems highly unlikely to be found in your data and thus the most likely to prevent 'awk' from splitting your input into too many fields.
This is the reason I wrote:
# awk -F"\000" '{print $1;print substr($0,1,2)}' /tmp/toomany
Regards!
...JRF...
04-26-2009 12:01 PM
Yes, that's why I suggested using literally this long string: -F"dummy separator"
If your "FS" is more than one char, you go through the ERE engine. Your -F"\000" is an ERE.
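The single-character vs. ERE distinction can be seen directly; a sketch with GNU awk (one-character separators are taken literally, multi-character ones go through the ERE engine):

```shell
# One-character FS (other than a space) is taken as a literal character:
printf 'a.b.c\n' | awk -F'.' '{ print NF }'    # 3 fields; '.' is not a wildcard here

# Multi-character FS is treated as an extended regular expression:
printf 'axxxb\n' | awk -F'x+' '{ print NF }'   # 'x+' eats the whole run of x's: 2 fields
```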
04-26-2009 01:33 PM
> Dennis: ...that's why I suggested to use literally this long string: -F"dummy separator" If your "fs" is more than one char, you go through the ERE engine. Your -F"\000" is an ERE.
OK, and do you say "ERE" because 'awk' supports the POSIX ERE engine as opposed to the POSIX basic RE engine?
Too, I could have (should have?) used simply:
"\0"
...in lieu of:
"\000"
Anyway, wouldn't the engine do _less_ work when attempting to match only one character than potentially matching the "d" in "dummy..." and then having to assess the "u" before (on finding none) bumping to the next character in the input string and starting all over?
Regards!
...JRF...
04-26-2009 10:25 PM
That's still two chars.
>wouldn't the engine do _less_ work when attempting to match only one character
Yes but it still switches to the ERE path.
If you used a control-A, it would be almost as unique as that NUL.
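A sketch of the control-A idea with GNU awk (`\001` is Ctrl-A in octal):

```shell
# Ctrl-A (octal 001) as FS: a single literal character, and one that
# essentially never appears in text data, so the record is never split.
printf 'one two three\n' | awk -F'\001' '{ print NF; print substr($0, 1, 3) }'
# prints 1, then one
```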