Search Results

Search found 19742 results on 790 pages for 'search tree'.


  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0 and 9. We have to start at the first position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward (i.e. to i-1 or i+1) or jump to any index holding the same value as index i. Time limit: 1 second. Max input size: 100000. I have tried to solve this as a single-source shortest path problem using Breadth First Search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time, so the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation? Or is there a better and more efficient way of solving the problem?

        int main() {
            vector<int> v;
            string str;
            vector<int> sets[10];          // sets[d] holds the positions of digit d
            cin >> str;
            int in;
            for (int i = 0; i < str.length(); i++) {
                in = str[i] - '0';
                v.push_back(in);
                sets[in].push_back(i);
            }
            int n = v.size();
            if (n == 1) { cout << "0\n"; return 0; }
            if (v[0] == v[n-1]) { cout << "1\n"; return 0; }
            // Adjacency list creation: O(n^2) in the worst case because every
            // pair of equal digits gets an explicit edge.
            vector<int> adj[100001];
            for (int i = 0; i < 10; i++) {
                for (int j = 0; j < sets[i].size(); j++) {
                    if (sets[i][j] > 0)     adj[sets[i][j]].push_back(sets[i][j] - 1);
                    if (sets[i][j] < n - 1) adj[sets[i][j]].push_back(sets[i][j] + 1);
                    for (int k = j + 1; k < sets[i].size(); k++) {
                        if (abs(sets[i][j] - sets[i][k]) != 1) {
                            adj[sets[i][j]].push_back(sets[i][k]);
                            adj[sets[i][k]].push_back(sets[i][j]);
                        }
                    }
                }
            }
            // Standard BFS from index 0.
            queue<int> q;
            q.push(0);
            int dist[100001];
            bool visited[100001] = {false};
            dist[0] = 0;
            visited[0] = true;
            int c = 0;
            while (!q.empty()) {
                int dq = q.front();
                q.pop();
                c++;
                for (int i = 0; i < adj[dq].size(); i++) {
                    if (visited[adj[dq][i]] == false) {
                        dist[adj[dq][i]] = dist[dq] + 1;
                        visited[adj[dq][i]] = true;
                        q.push(adj[dq][i]);
                    }
                }
            }
            cout << dist[n-1] << "\n";
            return 0;
        }
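
    One common way to avoid building the quadratic edge list at all, sketched below as a suggestion rather than as the poster's code: bucket the indices by digit once, and when BFS first pops an index holding digit d, enqueue every still-unvisited index in that digit's bucket and then clear the bucket. Each bucket is emptied at most once, so the same-value jumps cost O(n) in total and the whole search runs in O(n) without any explicit adjacency list.

        #include <iostream>
        #include <queue>
        #include <string>
        #include <vector>
        using namespace std;

        int main() {
            string s;
            cin >> s;
            int n = s.size();

            vector<vector<int>> bucket(10);              // positions of each digit
            for (int i = 0; i < n; ++i) bucket[s[i] - '0'].push_back(i);

            vector<int> dist(n, -1);                     // -1 marks "not visited yet"
            queue<int> q;
            dist[0] = 0;
            q.push(0);

            while (!q.empty()) {
                int u = q.front(); q.pop();
                if (u == n - 1) break;                   // reached the last index
                int d = s[u] - '0';

                // Same-digit jumps: each bucket is processed once and then cleared,
                // so these pushes cost O(n) over the whole run.
                for (int v : bucket[d])
                    if (dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
                bucket[d].clear();

                // Steps to the adjacent indices u-1 and u+1.
                for (int v : {u - 1, u + 1})
                    if (v >= 0 && v < n && dist[v] == -1) { dist[v] = dist[u] + 1; q.push(v); }
            }
            cout << dist[n - 1] << "\n";
            return 0;
        }

    The i-1/i+1 moves are handled separately, so no adjacency list is ever materialised.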

    Read the article

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize
    Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards)
    Parameter Location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine Mode: Both

    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation is affected by forecast tree branch size. TargetTaskSize is calculated automatically; it holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.

    - As the forecast is generated, the engine goes up the tree using max_fore_level, not top_level - 1. Max_fore_level has to be less than or equal to top_level - 1. Due to this requirement, combinations falling under the same top level - 1 member must be in the same task. A member of the top level - 1 of the forecast tree is known as a branch, so an engine task is comprised of one or more branches.

    - Revealing the current task size
      Go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This is deprecated in 7.3.1.3, since there is no longer a means of adjusting the branch size directly; the focus is now on proper hierarchy / forecast tree design.

    - Controlling the number of tasks
      The number of tasks created is the lowest of: the number of branches (as defined by top level - 1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You may be used to including the branch multiplier in this calculation; as of 7.3.1.3, the BranchID Multiple is deprecated.

    - Discovering the current branch size
      Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving the branches be revised or, at times, completely removed from the forecast tree.

    - Controlling forecast tree branch size
      - Run the following SQL to determine how evenly the branches are being split by the engine:

            select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;

        This shows whether some of the individual branches have an unusually large number of rows and thus might indicate that the engine is not efficiently dividing up the parallel tasks.

      - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator). The following query shows at which level of the forecast tree the forecast is being generated:

            select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;

        Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on these results, adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.

    For example:
      - Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches.
      - If a few resulting tasks are too large, it is recommended that the forecast tree level driving the branches be revised or at times completely removed from the forecast tree. For example, suppose the highest level of the forecast tree is set to Brand/All Locations:
        - You have 10 brands, but 2 of the brands account for 67% and 29% of all combinations.
        - There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
        - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
      - A correctly configured forecast tree is something that is done by the implementation team and is not the responsibility of Oracle Support.

    When TargetTaskSize is set to 0 (the default value), the system automatically calculates a value for TargetTaskSize depending on the number of engines.

    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: Development strongly recommends that the setting of TargetTaskSize remain at the default of zero (0).

    - How do I control the number of engines?
      Determine how many CPUs are on the machine(s) running the engine. As mentioned earlier, the general rule is to designate 2 engines per available CPU. For example, if you are running the engine on a machine that has 4 CPUs, you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.

    - Where do I set the number of engines?
      To add multiple computers where the engine will run, back up the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this entry allows 3 engines to start:

            <Entry>
              <Key argument="ComputerNames" />
              <Value type="string" argument="localhost,localhost,localhost" />
            </Entry>

    Otherwise, if there are no additional engines defined, the calculated value of TargetTaskSize is used. (Oracle does not recommend changing the default value.) TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations (also known as the group size). The engine manager uses this parameter to attempt to create branches of similar size; it will not create engines that do not have a branch. The engine divider algorithm uses the value of TargetTaskSize as a system-preferred branch size to create branches that are more equal in size, which improves engine performance. The engine divider will try to add as many tasks as possible to an existing branch, up to the limit of TargetTaskSize level 1 combinations, before adding new branches.

    Coming up next:
    - The engine divider
    - Group size
    - Level 1 combinations
    - MAX_FORE_LEVEL
    - Engine parameters

    Read the article

  • Large Queries in Google

    - by marienbad
    I have a large query I want to do in Google. It's just a string of ORs, for the purpose of determining which search terms are ranked the highest compared to the others. It's not ABSURDLY large -- it's only 5,500 characters. But Google says: "Request-URI Too Large. The requested URL /search... is too large to process." Is there a way around this?

    Read the article

  • How to mount a USB thumb drive as "Fixed" instead of "Removable"

    - by AMissico
    I have over 8 GB in my "Code Library" that I maintain on a 64 GB ScanDisk Ultra Backup USB Device. Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which uses Windows Search 4.0) cannot, because the USB device identifies itself as a Removable drive and Windows 7 refuses to index removable drives. How can I mount the USB Thumb Drive as Fixed instead of Removable?

    Read the article

  • Enhanced Explorer for Windows XP? [closed]

    - by iceman
    Possible Duplicate: Replacement for Windows Explorer? Is there a better explorer for Windows XP with Konqueror-like features (such as dual panes), and especially an enhanced integrated search option like the one in Vista? There are different products which cumulatively offer something similar, like xplorer2, Google Desktop Search or Windows Grep. But what about one integrated product?

    Read the article

  • Searching within Gmail's nested labels [closed]

    - by Penang
    Consider this setup: in my Gmail inbox I have 3 lists: mailing-lists/first-list, mailing-lists/second-list and mailing-lists/third-list. If I want to search for all unread messages in any sublabel of mailing-lists, is there a better way to search than "is:unread label:mailing-list/first-list OR label:mailing-list/second-list OR label:mailing-list/third-list"? Something like "is:unread label:mailing-list/*" is what I'm looking for.

    Read the article

  • Debugging iFilter plug-in (PDF indexing)

    - by Trevor Sullivan
    I have the official Adobe x64 iFilter PDF plug-in and the FoxIt Software iFilter PDF plug-in installed, and neither one seems to be allowing me to index the contents of PDF files. So far, I've: Added my data folder into the Indexing service configuration Ensured that PDF files are configured to index "file properties and contents" Rebuilt the index from scratch But, when I search, I can only search for PDF file names, not the contents of them. Any ideas on how to debug this issue?

    Read the article

  • How to Mount a USB "Thumb" Drive as "Fixed" in Windows (For Indexing)

    - by AMissico
    I have over 8GB in my "Code Library" that I maintain on a 64GB ScanDisk Ultra Backup USB Device. Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which uses Windows Search 4.0) cannot because the USB device identifies itself as a Removable drive and Windows 7 refuses to index removable drives. How can I mount the USB Thumb Drive as Fixed instead of Removable? All suggestions welcome and greatly appreciated.

    Read the article

  • windows 7 start menu showing incorrect data

    - by madmik3
    Hi, I've tried to rebuild my search index but it does not seem to help. When I search for anything, even "command", I either get an empty list or a list of shortcuts with the names Programs, Documents, Files, ... They all have the default white paper icon. If I click on them I get an error message that says "Internet security settings prevented one or more files from being opened." Any ideas? Thanks

    Read the article

  • Solr vs 'this' word

    - by s.arlashin
    There is a small problem with Solr. When I try to search text containing the word 'this' by issuing 'this' in the search console, Solr doesn't find anything. There are no problems with other words, however. Is 'this' some sort of reserved word or something like that?

    Read the article

  • warning: dict_ldap_lookup: Search error 1: Operations error

    - by drecute
    Please, I need help with the LDAP search filter to use to retrieve the user email information from LDAP. I'm running the Postfix LDAP lookup on Ubuntu Server 12.04. Everything seems to work fine, except getting the values returned from the search.

    Version 1:
        server_host = ldap://samba.example.com
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        bind = no
        domain = example.com

    Version 2:
        server_host = ldap://samba.example.com
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        bind_dn = cn=Users,dc=company,dc=example,dc=com
        domain = example.com

    Mail logs:
        Nov 26 11:13:26 mail postfix/smtpd[19662]: match_string: example.com ~? example.com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_lookup: No existing connection for LDAP source /etc/postfix/ldap-aliases.cf, reopening
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Connecting to server ldap://samba.example.com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Actual Protocol version used is 3.
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Binding to server ldap://samba.example.com with dn cn=Users,dc=company,dc=example,dc=com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: warning: dict_ldap_connect: Unable to bind to server ldap://samba.example.com with dn cn=Users,dc=company,dc=example,dc=com: 49 (Invalid credentials)
        Nov 26 11:13:26 mail postfix/smtpd[19662]: warning: ldap:/etc/postfix/ldap-aliases.cf lookup error for "[email protected]"
        Nov 26 11:13:26 mail postfix/smtpd[19662]: maps_find: virtual_alias_maps: [email protected]: search aborted
        Nov 26 11:13:26 mail postfix/smtpd[19662]: mail_addr_find: [email protected] -> (try again)
        Nov 26 11:13:26 mail postfix/smtpd[19662]: NOQUEUE: reject: RCPT from col0-omc3-s2.col0.hotmail.com[65.55.34.140]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<col0-omc3-s2.col0.hotmail.com>
        Nov 26 11:13:26 mail postfix/smtpd[19662]: > col0-omc3-s2.col0.hotmail.com[65.55.34.140]: 451 4.3.0 <[email protected]>: Temporary lookup failure

    Here's another log with a successful search result, but where getting the values of the result failed:
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: Using existing connection for LDAP source /etc/postfix/ldap-aliases.cf
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Searching with filter [email protected]
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_get_values[1]: Search found 1 match(es)
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_get_values[1]: Leaving dict_ldap_get_values
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: Search returned nothing
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: [email protected]: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: In dict_ldap_lookup
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Skipping lookup of key 'tola.akintola': domain mismatch
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: tola.akintola: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: In dict_ldap_lookup
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Skipping lookup of key '@example.com': domain mismatch
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: @example.com: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: mail_addr_find: [email protected] -> (not found)

    My refined ldap-aliases.cf looks like this:
        server_host = ldap://samba.example.com
        server_port = 3268
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        result_attribute = uid
        bind_dn = cn=Administrator,cn=Users,dc=company,dc=example,dc=com
        bind_pw = pass
        domain = example.com

    So I'd like to know what LDAP filter is appropriate to get this to work. Thanks for helping out.

    Read the article

  • How to setup Lucene/Solr for a B2B web app?

    - by Bill Paetzke
    Given:
      - 1 database per client (business customer)
      - 5000 clients
      - Clients have between 2 to 2000 users (avg is ~100 users/client)
      - 100k to 10 million records per database
      - Users need to search those records often (it's the best way to navigate their data)
    Possibly relevant info:
      - Several new clients each week (any time during business hours)
      - Multiple web servers and database servers (users can login via any web server)
      - Let's stay agnostic of language or sql brand, since Lucene (and Solr) have a breadth of support
    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients. And each client gets their own database. They use an index per client and store it in the client's database. I'm not sure on the details. And I'm not sure if this is a serious mod to Lucene.
    The Question: How would you setup Lucene search so that each client can only search within its database?
      - How would you setup the index(es)?
      - Where do you store the index(es)?
      - Would you need to add a filter to all search queries?
      - If a client cancelled, how would you delete their (part of the) index? (this may be trivial--not sure yet)
    Possible Solutions:
      - Make an index for each client (database).
        Pro: Search is faster (than one-index-for-all method). Indices are relative to the size of the client's data.
        Con: I'm not sure what this entails, nor do I know if this is beyond Lucene's scope.
      - Have a single, gigantic index with a database_name field. Always include database_name as a filter.
        Pro: Not sure. Maybe good for tech support or billing dept to search all databases for info.
        Con: Search is slower (than index-per-client method). Flawed security if query filter removed.
    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.

    Read the article

  • New record may be written twice in clustered index structure

    - by Cupidvogel
    As per the article at Microsoft, under the "Test 1: INSERT Performance" section, it is written that: "For the table with the clustered index, only a single write operation is required since the leaf nodes of the clustered index are data pages (as explained in the section Clustered Indexes and Heaps), whereas for the table with the nonclustered index, two write operations are required -- one for the entry into the index B-tree and another for the insert of the data itself." I don't think that is necessarily true. Clustered indexes are implemented through B+ tree structures, right? If you look at this article, which gives a simple example of inserting into a B+ tree, we can see that when 8 is initially inserted it is written only once, but then when 5 comes in, it is written to the root node as well (thus written twice, albeit not at the time of its own insertion). Also, when the next 8 comes in, it is written twice: once at the root and then at the leaf. So wouldn't it be more correct to say that the number of rewrites in the case of a clustered index is much lower than for a nonclustered index structure (where it must occur every time), rather than saying that a rewrite doesn't occur in the clustered index at all?
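
    A rough way to put numbers on that intuition, under the purely illustrative assumption that every index node touched by an insert costs exactly one page write (ignoring logging, checkpoints and buffer-pool effects): a split-free insert into the clustered index rewrites only the leaf page, while each node that splits adds two writes (the two halves) plus one write for the ancestor that absorbs the new separator key, or for the newly created root when the split propagates to the top. The helper below is only that back-of-the-envelope count, not SQL Server's actual I/O accounting.

        #include <cstddef>
        #include <iostream>

        // Page writes for one B+ tree insert, assuming one write per touched node.
        std::size_t pageWritesForInsert(std::size_t nodesThatSplit) {
            // No split: only the leaf page is rewritten.
            // Each splitting node is rewritten and gains a new sibling (2 writes),
            // plus one write for the first ancestor that absorbs a separator key
            // (or for the brand-new root if the split reaches the top).
            return nodesThatSplit == 0 ? 1 : 2 * nodesThatSplit + 1;
        }

        int main() {
            for (std::size_t splits = 0; splits <= 3; ++splits)
                std::cout << splits << " split(s) -> "
                          << pageWritesForInsert(splits) << " page write(s)\n";
            return 0;
        }

    The point being that extra writes happen only on the (comparatively rare) inserts that trigger splits, rather than on every insert as with a separate nonclustered index entry.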

    Read the article

  • Searchlogic cannot sort search result

    - by jaycode
    Imagine this code:

        search = Project.search(
          :title_or_description_or_child_name_or_child_age_or_inspiration_or_decorating_style_or_favorite_item_or_others_like_any => keys,
          :galleries_id_like_any => @g,
          :styles_id_like_any => @st,
          :tags_like_any => @t
        )

    search.all returns the rows correctly, but search.descend_by_views returns nil. Is this gem buggy? What else should I use then?

    Read the article

  • Global Search In Android

    - by Sunil
    Hi, I want the Messaging (SMS/MMS) application to be part of global search, i.e., when I type a word in the search box on the home screen, it should show me the messages that contain that word. Currently we have local search within the Messaging app, which works fine, but how do I make Messaging globally searchable? I tried referring to the searchable dictionary example and also some online resources, but they didn't help. Please provide the steps to make the Messaging application part of global search. Regards, Sunil.

    Read the article

  • Search functionality in a lightbox

    - by Salil
    Hi all, I have a page with a "search" link. On clicking search, I open a lightbox containing a text field and a Search button. Now I want that when I enter a keyword in the text field and hit the submit button, the results are shown in the lightbox itself. Is this possible without Ajax, or do I have to use an Ajax form? Also, just curious: why do we use a GET request instead of a POST request for search functionality?

    Read the article

  • What is the difference between "Office SharePoint Server Search" and "Windows SharePoint Services Search"?

    - by Keshaw Khandelwal
    Hi, I have a MOSS 2007 environment with multiple WFE servers. Does anyone know what the difference is between "Office SharePoint Server Search" and "Windows SharePoint Services Search"? Which service do I have to start? If I have to start "Office SharePoint Server Search", then what is the point of offering "Windows SharePoint Services Search" in Central Administration? ---Keshaw

    Read the article

  • A simple search controller that stores search history: should I use resource routing or non-resource routes?

    - by vfilby
    I am learning rails and am toying with a simple web-app that integrates with flickr to search photos based on user given criteria and store the query in a search history table. I am seeking the best or 'rails' way of handling this. Should I setup a controller and non-resource routes that handle the search and store the data in a custom table; or should I create a resource for queries with a resource route and an additional path for search?

    Read the article

  • Using - Instead of + in WordPress Search Permalink

    - by poer
    Options +FollowSymLinks
    RewriteEngine On
    RewriteCond %{THE_REQUEST} ^[A-Z]+ /(#[^?& ]*)??([^& ]*&)?s=([^& ]+)[^ ]* HTTP/
    RewriteRule ^$ http://wordpressblog.com/search/%3? [R=301,L]

    Currently I use the above .htaccess mod_rewrite rule to convert the default WordPress search permalink http://wordpressblog.com/?s=key+word into a nice permalink like this: http://wordpressblog.com/search/key+word. My question is: what part of the mod_rewrite rule above do I need to change to get an even nicer permalink like this one: http://wordpressblog.com/search/key-word.html? Thanks.

    Read the article

  • Why does this search not generate the correct result?

    - by user482742
    Hi, all: The code below is meant to find an existing customer; if he is in the list, his count is incremented by one, and if he is not in the list, he is simply added to the list. I use the Search function to do this, but it fails and generates incorrect records: it cannot find the customer or the right number of customers. If I instead use a for loop to iterate over the list, it works well and can find the customer and add a new customer in that for-loop search procedure (I did not paste the for-loop search procedure here). Another problem is that there is no difference between setting list.Sorted to True and False. It seems the Search function is not correct. This search function is from an example in a Delphi textbook. The code below is Delphi 7. Thank you.

        procedure TForm1.Create;
        begin
          list := TStringList.Create;
          list.Sorted := True;  // Search generates exactly the same, incorrect records
                                // no matter whether list.Sorted is set to True or False.
          list.Duplicates := dupIgnore;
          ..
        end;

        procedure AddCustomer;
        var
          ..
        begin
          while p1.MatchAgain do  // p1 is a regular expression
          begin
            customer := p1.MatchedExpression;
            if (Search(customer) = False) then
            begin
              list.Add(customer + '=1');
            end;
            allcustomer := allcustomer + 1;
            ..
          end;
        end;

        function TForm1.Search(customer: string): Boolean;
        var
          fre: string;
          num: Integer;
          L: Integer;
          R: Integer;
          M: Integer;
          CompareResult: Integer;
          found: Boolean;
        begin
          Result := False;
          found := False;
          L := 0;
          R := list.Count - 1;
          while (L <= R) and (not found) do
          begin
            M := (L + R) div 2;
            CompareResult := CompareText(list.Names[M], customer);
            if (CompareResult = 0) then
            begin
              fre := list.ValueFromIndex[M];
              num := StrToInt(fre);
              num := num + 1;
              list.ValueFromIndex[M] := IntToStr(num);
              found := True;
              Result := True;
              Exit;
            end
            else if CompareResult > 0 then
              R := M - 1
            else
              L := M + 1;
          end;
        end;

    Read the article

  • Outlook 2007 OST File Indexing and OneNote 2007 Indexing are Broken

    - by Matt
    I'm running Outlook 2007 under Windows 7 Home Premium RTM. My OST file was previously being indexed properly, but eventually searches slowed down significantly, so I suspected a problem. Searching and indexing appear broken in OneNote 2007 as well, as search time is now significantly longer. I brought up the Outlook 2007 Search Options dialog and noticed that my mailbox (running from an Exchange 2007 server) wasn't listed in the "Index messages in these data files:" list box. Next I ran the Windows "Find and fix problems with Windows Search" wizard, which reported no errors. Then I brought up the Windows Indexing Options dialog, which shows Outlook listed, clicked Advanced and rebuilt the index. No dice -- the list box in the Outlook 2007 dialog still didn't show my mailbox. When I click the Modify button in the Indexing Options dialog, I see an entry starting with "oneindex://..."; when I hover over it, the alt text indicates "This location is currently unavailable". When I delete it and rebuild the index, this entry returns. UPDATE: Comparing the Indexing Options dialog with a working PC shows that on the broken PC, the lower half of the dialog lists Outlook but neither Outlook nor OneNote appears in the upper half. The working PC has Outlook and OneNote in both parts of the dialog.

    Read the article

  • How can I fix Office Sharepoint Search service?

    - by unknown (yahoo)
    For some reason, in Operations > Services on Server, the Office SharePoint Server Search service is MISSING! This means I can't get Shared Services working, and in turn I cannot get Usage and Reporting to see who is visiting my website, with what OS/browser, etc. I don't think I have ever seen this work in my 2 years of trying with SharePoint. So in essence I have two problems: most importantly search, and secondly usage reports. When trying to search, all I get is an unknown error, and nothing is in my Event Viewer at all. I have tried http://www.cjvandyk.com/blog/Lists/Posts/Post.aspx?ID=96; in the comments on that site, another person reports that search is missing entirely and is going to uninstall. I'm tired of uninstalling SharePoint and reinstalling to fix an odd issue. I have things set up with Team Foundation Server that took forever to get working, so reinstallation is not my solution. As for usage reporting, this is the error Microsoft documents: "Both Windows SharePoint Services Usage logging and Office SharePoint Usage Processing must be enabled to view usage reports. Please contact your administrator to ensure that these services are enabled." I cannot do all the steps for that, since SSP needs to be set up, which I can't do as described above.

    Read the article

  • osx bash grep - finding search terms in a large file with one single line

    - by unsynchronized
    Is there a simple Unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term, even if there is only one "line" in a very large text file? OK, this should be easy. Famous last words. I'm not that familiar with grep, but it seems it is mainly used to filter out lines in the input that contain search terms. I have a very large JSON file that I downloaded that I want to search for a particular term. Before you click the link, be warned: it's over 244 MB. It is from the Internet Wayback Machine and contains lists of zip files of archived photos; I am trying to find mine. Their web interface is broken, so I found the JSON file that they make public (it's the last one on the list). When I grep for my username, it finds it, but proceeds to dump that line to the console. The problem is that the line is 244 MB long, and it's the only line in the file. I tried using less, but could not get that to do much; it's very slow and seems to have the same issue. So again: is there a simple Unix command line that lets me isolate, say, 512 bytes either side of a search term?
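
    If a small throwaway program is acceptable instead of a pure shell one-liner, the sketch below (a hypothetical helper; the file name and search term are passed as arguments) simply reads the whole file into memory, which is workable at roughly 244 MB, and prints 512 bytes of context on either side of every match:

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <iterator>
        #include <string>

        int main(int argc, char* argv[]) {
            if (argc != 3) {
                std::cerr << "usage: " << argv[0] << " <file> <term>\n";
                return 1;
            }
            std::ifstream in(argv[1], std::ios::binary);
            if (!in) { std::cerr << "cannot open " << argv[1] << "\n"; return 1; }

            // Slurp the entire file; fine for a few hundred MB.
            std::string data((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());
            const std::string term = argv[2];
            const std::size_t context = 512;          // bytes either side of a hit

            for (std::size_t pos = data.find(term); pos != std::string::npos;
                 pos = data.find(term, pos + term.size())) {
                std::size_t from = pos > context ? pos - context : 0;
                std::size_t to = std::min(data.size(), pos + term.size() + context);
                std::cout << data.substr(from, to - from) << "\n---\n";
            }
            return 0;
        }

    Reading the file in one go keeps the code simple; a streaming version with an overlapping window would avoid holding the whole file in memory.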

    Read the article
