Search Results

Search found 28760 results on 1151 pages for 'search folder'.

  • Adding folder to Eclipse classpath

    - by Paul
    Hello. When I develop a project I create a folder in it called libs, and in this folder I place all the library jars that I use. Is there a way to add just the libs folder to the classpath, so that I do not have to add each individual jar? I was thinking of something along the lines of a classpath variable or a user library. Many thanks.
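
    For reference, a user library is indeed the usual way to collapse the per-jar entries. Each jar normally becomes its own entry in the project's .classpath file, while a user library (defined under Preferences > Java > Build Path > User Libraries) is referenced as a single container entry. A minimal sketch with illustrative names:

        <?xml version="1.0" encoding="UTF-8"?>
        <classpath>
            <classpathentry kind="src" path="src"/>
            <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
            <!-- without a user library: one entry per jar -->
            <classpathentry kind="lib" path="libs/commons-lang.jar"/>
            <classpathentry kind="lib" path="libs/guava.jar"/>
            <!-- with a user library named MyLibs: one entry covers them all -->
            <classpathentry kind="con" path="org.eclipse.jdt.USER_LIBRARY/MyLibs"/>
            <classpathentry kind="output" path="bin"/>
        </classpath>

    The user library still has to be populated by hand (or exported and imported between workspaces), but adding a new jar then touches one dialog instead of every project that uses the folder.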

  • Getting local My Documents folder path

    - by smsrecv
    In my C++/WinAPI application I get the My Documents folder path using this code:

        wchar_t path[MAX_PATH];
        SHGetFolderPathW(NULL, CSIDL_PERSONAL, NULL, SHGFP_TYPE_CURRENT, path);

    One of the users runs my program on a PC connected to his corporate network, where the My Documents folder is redirected to a network share, so my code returns something like \\paq\user.name$\My Documents even though he says he has a local copy of My Documents. The problem is that when he 'swaps VPN', the networked My Documents becomes unavailable, and my program crashes with system error code 64, "The specified network name is no longer available" (it tries to write to a file opened in the networked My Documents folder). How can I always get the local My Documents folder path using C++/WinAPI?
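
    There is no documented "local My Documents" once the folder has been redirected, so anything here is a workaround. One approach is to detect that the resolved path is a network path and fall back to a local per-user folder. A sketch using the Vista-and-later SHGetKnownFolderPath; the choice of FOLDERID_LocalAppData as the fallback is an assumption, not a rule:

        #include <windows.h>
        #include <shlobj.h>
        #include <shlwapi.h>
        #include <string>
        #pragma comment(lib, "shlwapi.lib")

        // Returns a writable, local documents-like path, avoiding redirected network folders.
        bool GetWritableDocumentsPath(std::wstring& out)
        {
            PWSTR p = nullptr;
            if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_Documents, 0, NULL, &p)))
            {
                // PathIsNetworkPathW catches UNC paths and mapped network drives.
                BOOL remote = PathIsNetworkPathW(p);
                if (!remote) out = p;
                CoTaskMemFree(p);
                if (!remote) return true;
            }
            // Fallback: keep the data under the local (non-roaming) app data folder.
            if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_LocalAppData, 0, NULL, &p)))
            {
                out = p;
                CoTaskMemFree(p);
                return true;
            }
            return false;
        }

    Even with the check, the write itself still needs error handling, since a share can disappear between the check and the write, which is exactly the error 64 scenario described above.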

  • Need a code snippet to find all *.html files under a folder in Node.js

    - by Nicolas S.Xu
    I'd like to find all *.html files in the src folder and all of its subfolders using Node.js. What is the best way to do it?

        var folder = '/project1/src';
        var extension = 'html';
        var cb = function (err, results) {
            // results is an array of the files, with paths relative to the folder
            console.log(results);
        };

        // This function is what I am looking for.
        // It has to recursively traverse all subfolders.
        findFiles(folder, extension, cb);

    I think a lot of developers will have a tested solution for this already, and it is better to use one than to write my own.
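
    For reference, one possible shape for the missing findFiles, written against the plain fs callback API so it needs no dependencies. This is a sketch rather than a hardened library; on recent Node versions fs.readdir's recursive option shortens it considerably:

        var fs = require('fs');
        var path = require('path');

        function findFiles(folder, extension, cb) {
            var results = [];
            var pending = 0;    // number of outstanding fs operations
            var failed = false;

            function fail(err) {
                if (!failed) { failed = true; cb(err); }
            }

            function walk(dir) {
                pending++;
                fs.readdir(dir, function (err, entries) {
                    if (err) return fail(err);
                    entries.forEach(function (entry) {
                        var full = path.join(dir, entry);
                        pending++;
                        fs.stat(full, function (err, stat) {
                            if (err) return fail(err);
                            if (stat.isDirectory()) {
                                walk(full);           // recurse into subfolder
                            } else if (path.extname(full) === '.' + extension) {
                                results.push(path.relative(folder, full));
                            }
                            if (--pending === 0 && !failed) cb(null, results);
                        });
                    });
                    if (--pending === 0 && !failed) cb(null, results);
                });
            }

            walk(folder);
        }

    The counter tracks every readdir and stat still in flight; when it drops to zero the tree has been fully visited and the callback fires exactly once.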

  • Xcode - Automatically add all files in a folder to a target

    - by Akshay
    In Xcode, is there a way to specify that all files in a folder are compiled by a target? E.g., the 'Test' target automatically compiles all files in the 'Tests' folder, whereas the 'App' target compiles everything in the 'Sources' folder. Today, the way I'm doing it is to add each file to a target every time I create it. This feels a bit error prone and redundant, since the files are already organized in the correct folders. Thanks.

  • Process locks a folder

    - by Vad
    I have a pretty odd situation. There are two applications:

    1) C:\MyFolder1\First.exe
    2) C:\MyFolder2\Second.exe

    First.exe runs Second.exe and quits:

        Process.Start(@"C:\MyFolder2\Second.exe"); // and exit

    Second.exe waits a few seconds and tries to remove the C:\MyFolder1\ folder:

        // Wait for 5 seconds - First.exe has terminated by that time for sure
        Directory.Delete(@"C:\MyFolder1\", true);

    The action fails with "The process cannot access the file 'C:\MyFolder1\' because it is being used by another process." It is able to remove the First.exe file (actually all files in the folder), but not the folder itself. Does anybody have an idea why the folder is locked by the second process?
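
    A likely culprit, though it is an assumption about this setup: a child process inherits its parent's current directory, so Second.exe starts with C:\MyFolder1 as its working directory, and Windows will not delete a directory that is any process's current directory. Setting the working directory explicitly avoids this. A sketch:

        using System.Diagnostics;

        // In First.exe: start the child with a working directory
        // outside the folder that will later be deleted.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\MyFolder2\Second.exe",
            WorkingDirectory = @"C:\MyFolder2"
        };
        Process.Start(psi);

        // Alternatively, in Second.exe, before the delete:
        // Environment.CurrentDirectory = @"C:\MyFolder2";

    Either change moves the child's current directory out of C:\MyFolder1, after which Directory.Delete can remove the now-empty folder.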

  • File/folder SQL programming

    - by eski
    I'm trying to figure out the best way to build a file/folder system in SQL. I'm making a website that will use a system similar to Explorer in Windows: you open your C: drive and see some folders and files, you open one folder and see more files and folders inside it. So what I'm asking is: would I use one table for this and just point each row at its parent id, or what? I have this in my head: you create a primary folder and it gets u_id=1. Then I make a file in that folder; it gets u_id=2 and p_id=1, so I know where it lives, right? Same with the folders. This would all be in one table, but I can't help thinking there is some major flaw in this...
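
    What the question sketches is the standard adjacency-list pattern, and it is a perfectly workable starting point. A minimal sketch, with illustrative names and types:

        -- One row per file or folder; p_id points at the parent folder.
        CREATE TABLE node (
            u_id      INT PRIMARY KEY,
            p_id      INT NULL REFERENCES node(u_id),  -- NULL marks a root entry
            name      VARCHAR(255) NOT NULL,
            is_folder BIT NOT NULL                     -- 1 = folder, 0 = file
        );

        -- Contents of folder 1, one level deep:
        SELECT * FROM node WHERE p_id = 1;

    The usual flaw is not correctness but path queries: listing one level is cheap, while building a full path or fetching a whole subtree needs a recursive query (WITH RECURSIVE, where the database supports it) or a different encoding such as materialized paths or nested sets.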

  • My search for what the Cloud will mean for my work, part 2

    - by Kay Sellenrode
    My experience with the cloud, and why work will change rather than disappear. I have had multiple experiences with the cloud so far, mostly good. I have worked on several cloud solutions in the past, but let me describe those as 0.x versions. For me the first really serious cloud experience was a bit more than a year ago, when our company switched from an in-house server to Microsoft BPOS as a complete replacement.

    Since we are a small consultancy firm and have little else to do besides consulting, our IT requirements are quite simple: we need mail and storage space for our documents. With the in-house server we had multiple outages a year, mostly through lack of administration. Being consultants in the field and hardly having time to maintain a server, BPOS was and still is the right solution for us. Since the migration we have had fewer outages and a much more robust solution. Have we run into issues with BPOS in our own environment? No, not that I'm aware of.

    Based on this experience I took a stance on the deployability of BPOS and cloud solutions: they are suitable for the MKB (Dutch for small and medium-sized businesses). Most small businesses don't have enough work to justify a full-time IT admin, and hiring a service provider to maintain their own server might be even more costly than hiring an admin. So, seeing the capabilities of BPOS and the needs of most businesses, I see it as a great solution that gives a business a complete server replacement for a fixed price per user, resulting in a clear budget for IT spending, something most small businesses have been looking for, for a long time.

    Right now I'm deploying BPOS with a customer, and I am running into some of the Cloud 1.0 issues. In my opinion BPOS is a good, working Cloud 1.0 solution. What do I mean by 1.0? Well, 1.0 is mostly a tested solution (unlike the 0.x versions), but it still has quite a few limitations caused by too little market experience. In my opinion this is also the reason we don't see that many BPOS customers yet, and why I think Office 365 will make a huge difference. What I have seen of 365 shows me it is a Cloud 2.0 version, meaning it has all the needed features and is much more flexible for the customer.

    This is also why I see changes happening in my field of work: changes, not unemployment, due to cloud solutions. Cloud 1.0 solutions gave me the idea that if every customer adopted them I would be out of work. But in reality, Cloud 1.0 solutions are here just to establish what the market needs. Cloud 2.0 and higher versions will give the customer much more flexibility, but they also require a consultant. Where the 1.0 versions are simple to set up and maintain, a 2.0 solution needs more thought up front and afterwards. For example, BPOS in its 1.0 version brings you a very simplified Exchange 2007 solution, suitable for some customers; with Office 365 you receive an almost full-blown Exchange 2010 solution, and I expect this to be even more customizable in the next version.

    In my search for the changes to my work I try to regularly write a post with my thoughts on the cloud and its impact on my work as a consultant. I'm also planning to present on this topic, so if anyone is interested in seeing me present on it, you're more than welcome to contact me.

  • Improving the running time of Breadth First Search and Adjacency List creation

    - by user45957
    We are given an array of integers where all elements are between 0 and 9. We have to start at the first position and reach the end in the minimum number of moves, where from an index i we can move one position back or forward (i-1 and i+1), or jump to any index having the same value as index i. Time limit: 1 second. Max input size: 100000. I have tried to solve this as a single-source shortest path problem using breadth-first search, and though BFS itself is O(V+E) and runs in time, the adjacency list creation takes O(n^2) time, so the overall complexity becomes O(n^2). Is there any way I can decrease the time complexity of the adjacency list creation, or is there a better and more efficient way of solving the problem? My current code:

        #include <iostream>
        #include <vector>
        #include <queue>
        #include <string>
        #include <cstdlib>
        using namespace std;

        int main() {
            vector<int> v;
            string str;
            vector<int> sets[10];
            cin >> str;
            for (int i = 0; i < (int)str.length(); i++) {
                int in = str[i] - '0';
                v.push_back(in);
                sets[in].push_back(i);
            }
            int n = v.size();
            if (n == 1) { cout << "0\n"; return 0; }
            if (v[0] == v[n-1]) { cout << "1\n"; return 0; }

            // O(n^2) in the worst case: every same-digit pair gets an edge.
            vector<int> adj[100001];
            for (int i = 0; i < 10; i++) {
                for (int j = 0; j < (int)sets[i].size(); j++) {
                    if (sets[i][j] > 0) adj[sets[i][j]].push_back(sets[i][j] - 1);
                    if (sets[i][j] < n - 1) adj[sets[i][j]].push_back(sets[i][j] + 1);
                    for (int k = j + 1; k < (int)sets[i].size(); k++) {
                        if (abs(sets[i][j] - sets[i][k]) != 1) {
                            adj[sets[i][j]].push_back(sets[i][k]);
                            adj[sets[i][k]].push_back(sets[i][j]);
                        }
                    }
                }
            }

            queue<int> q;
            q.push(0);
            int dist[100001];
            bool visited[100001] = {false};
            dist[0] = 0;
            visited[0] = true;
            while (!q.empty()) {
                int dq = q.front();
                q.pop();
                for (int i = 0; i < (int)adj[dq].size(); i++) {
                    if (!visited[adj[dq][i]]) {
                        dist[adj[dq][i]] = dist[dq] + 1;
                        visited[adj[dq][i]] = true;
                        q.push(adj[dq][i]);
                    }
                }
            }
            cout << dist[n-1] << "\n";
            return 0;
        }
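
    The adjacency-list construction is indeed the bottleneck, and it can be dropped entirely. The standard trick for this problem: when the BFS dequeues an index whose digit is d, relax i-1, i+1 and every index in sets[d], then clear sets[d]; any later node with the same digit could only rediscover vertices that are already visited. Each digit bucket is emptied exactly once, so the whole search becomes O(n). A sketch of the modified loop, where tryVisit is a hypothetical helper (e.g. a lambda capturing q, dist and visited) that marks an unvisited index, records its distance and pushes it:

        // Replaces both the adjacency-list construction and the BFS loop.
        while (!q.empty()) {
            int u = q.front(); q.pop();
            int d = v[u];
            for (int w : sets[d]) tryVisit(w, dist[u] + 1);
            sets[d].clear();                  // this bucket is never scanned again
            if (u > 0)     tryVisit(u - 1, dist[u] + 1);
            if (u < n - 1) tryVisit(u + 1, dist[u] + 1);
        }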

  • Large Queries in Google

    - by marienbad
    I have a large query I want to run on Google. It's just a string of ORs, for the purpose of determining which search terms rank highest compared to the others. It's not absurdly large; it's only 5,500 characters. But Google says: "Request-URI Too Large. The requested URL /search... is too large to process." Is there a way around this?

  • How to mount a USB thumb drive as "Fixed" instead of "Removable"

    - by AMissico
    I have over 8 GB in my "Code Library" that I maintain on a 64 GB SanDisk Ultra Backup USB device. Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which uses Windows Search 4.0) cannot, because the USB device identifies itself as a Removable drive and Windows 7 refuses to index removable drives. How can I mount the USB thumb drive as Fixed instead of Removable?

  • Enhanced Explorer for Windows XP? [closed]

    - by iceman
    Possible Duplicate: Replacement for Windows Explorer? Is there a better file explorer for Windows XP with Konqueror-like features (such as dual panes), and especially an enhanced integrated search option like Vista's? There are different products that cumulatively offer something similar, like xplorer2, Google Desktop Search, or Windows Grep, but what about one integrated product?

  • Searching within Gmail's nested labels [closed]

    - by Penang
    Consider this setup in my Gmail inbox: I have three labels, mailing-lists/first-list, mailing-lists/second-list, and mailing-lists/third-list. If I want to search for all unread messages in any sublabel of mailing-lists, is there a better way to search than "is:unread label:mailing-lists/first-list OR label:mailing-lists/second-list OR label:mailing-lists/third-list"? Something like "is:unread label:mailing-lists/*" is what I'm looking for.

  • Debugging iFilter plug-in (PDF indexing)

    - by Trevor Sullivan
    I have the official Adobe x64 PDF iFilter plug-in and the Foxit Software PDF iFilter plug-in installed, and neither one seems to let me index the contents of PDF files. So far, I've added my data folder to the indexing service configuration, ensured that PDF files are configured to index "file properties and contents", and rebuilt the index from scratch. But when I search, I can only match PDF file names, not their contents. Any ideas on how to debug this issue?
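
    One way to split this problem in half, assuming the Windows SDK is available: the SDK ships a small test tool, filtdump.exe, which loads the registered iFilter for a given file and prints the text it extracts. Running it against a sample PDF shows whether the filter itself works, separating iFilter registration problems from indexer configuration problems. A sketch (the exact invocation is from memory and may vary by SDK version):

        REM Dump the text the registered PDF iFilter extracts from one file
        filtdump C:\Data\sample.pdf

    If filtdump prints the document text, the filter is fine and the indexing configuration is the suspect; if it fails, the x64 iFilter registration is the place to look.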

  • How to Mount a USB "Thumb" Drive as "Fixed" in Windows (For Indexing)

    - by AMissico
    I have over 8 GB in my "Code Library" that I maintain on a 64 GB SanDisk Ultra Backup USB device. Windows Search 4.0 (installed on Windows XP) can index removable drives, but Windows 7 (which uses Windows Search 4.0) cannot, because the USB device identifies itself as a Removable drive and Windows 7 refuses to index removable drives. How can I mount the USB thumb drive as Fixed instead of Removable? All suggestions welcome and greatly appreciated.

  • Windows 7 start menu showing incorrect data

    - by madmik3
    Hi, I've tried to rebuild my search index, but it does not seem to help. When I search for anything, even 'command', I either get an empty list or a list of shortcuts with the names Programs, Documents, Files... They all have the default white-paper icon. If I click on them, I get an error message that says "Internet security settings prevented one or more files from being opened." Any ideas? Thanks.

  • Solr vs 'this' word

    - by s.arlashin
    There is a small problem with Solr. When I try to search text containing the word 'this' by entering 'this' in the search console, Solr doesn't find anything. There are no problems with other words, however. Is it some sort of reserved word or something like that?
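
    The likely cause is not a reserved word but stop-word filtering: Solr's default English text analysis removes very common words such as 'this' at both index and query time via StopFilterFactory. A sketch of the relevant fragment of a schema.xml field type (names vary by schema):

        <fieldType name="text" class="solr.TextField">
          <analyzer>
            <tokenizer class="solr.StandardTokenizerFactory"/>
            <!-- tokens listed in stopwords.txt ("this", "the", ...) are dropped here -->
            <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
          </analyzer>
        </fieldType>

    Removing 'this' from stopwords.txt (and reindexing) makes the word searchable, at the cost of a slightly larger index.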

  • How to get Ubuntu to automatically connect to a (Windows, cabled & shared) network folder?

    - by Koen
    Through the normal process I reach my shared music folder on my Windows computer, i.e. Places > Network > Windows PC > Music. After rebooting my Ubuntu laptop, however, this connection isn't re-established automatically. My question: how do I get Ubuntu to automatically connect to that shared folder after login (ideally first checking whether the Windows computer is online)? I ask because I added the folder to the Banshee library, and I currently have to open the shared folder manually before Banshee can play the files.
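
    One common approach is a CIFS entry in /etc/fstab (with the cifs-utils package installed) so the share is mounted at boot. A sketch in which the host name, share name, mount point and credentials file are all assumptions to adapt:

        # /etc/fstab
        //windows-pc/Music  /home/koen/WindowsMusic  cifs  credentials=/home/koen/.smbcredentials,uid=1000,iocharset=utf8  0  0

        # /home/koen/.smbcredentials (chmod 600)
        username=koen
        password=secret

    Pointing Banshee at the local mount point then works regardless of how the Places > Network path behaves. If the laptop often boots away from the home network, an automounter such as autofs is the more forgiving choice, since it mounts on first access and tolerates an absent server.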

  • WebCenter Content Web Search Performance: Do you really need that folder path info?

    - by Nicolas Montoya
    End-users want content at their fingertips, at the speed of thought if possible. When running search operations in the WebCenter Content web interface, every second or fraction of a second of improvement matters. While doing some systemdatabase trace analysis on a customer environment, we came across SQL queries that were being triggered unnecessarily: they determined the folder path for every entry in the search result set, yet this folder path was not even part of the information displayed in the user interface.

    Why was the folder path information being collected when it was not displayed in the UI? We found that the configuration parameter 'FolderPathInSearchResults' was set to 'true' under Administration > Admin Server > General Configuration > Additional Configuration Variables.

    When executing a quick search by keyword we were getting 100 out of 2280 entries in the first page of the result set. With 'FolderPathInSearchResults' set to 'true', the following queries appear in the systemdatabase tracing. First, 100 executions of a query on the FolderFiles table, one for each document displayed on the first page:

        >systemdatabase/6  12.13 11:17:48.188  IdcServer-199  1.45 ms.
        SELECT * FROM FolderFiles WHERE dDocName='SLC02VGVUSORAC140641' AND fLinkRank=0
        [Executed. Returned row(s): true]

    Second, 382 executions of a query on the folders tables; most of the documents matching the keyword criteria sit at a folder depth of three or four:

        >systemdatabase/6  12.13 11:17:48.114  IdcServer-199  2.57 ms.
        SELECT FolderFolders.*,FolderMetaDefaults.* FROM FolderFolders,FolderMetaDefaults
        WHERE FolderFolders.fFolderGUID=FolderMetaDefaults.fFolderGUID(+)
        AND ((FolderFolders.fFolderGUID = '1EB8E527E19B09ED3FE82EE310AEA13A'))
        [Executed. Returned row(s): true]

    By setting the 'FolderPathInSearchResults' configuration parameter to 'false', the above queries were no longer reported in the Server Output System Audit Information.

    Now, let's consider a practical scenario: a search result page of 100 documents and an average folder depth of 5 per document in the result set. The number of folder-path-related queries will be 100 + 5*100 = 600. If each query takes slightly over 3 ms, that is about 2,000 ms (2 seconds) of server time spent collecting this information. The overall performance impact goes beyond server-side execution time, as this information also needs to travel from the server to the browser, and if the documents are nested further into the folder hierarchy, additional hundreds of queries may be executed. If the folder path is not displayed in the end-user interface profile, your system may be better off with the 'FolderPathInSearchResults' configuration parameter disabled.

  • Can I place the Ubuntu One for Windows home sync folder anywhere on C:\ during installation?

    - by vonshavingcream
    My company does not allow us to keep personal files inside our personal folder; something about the roaming profiles getting too large. With Dropbox I am able to set the destination of the folder during the install. Is there any way to tell Ubuntu One where to put the Ubuntu One folder? I don't want to add external folders to the sync list; I just want to control where the installer creates the Ubuntu One folder. Otherwise I can't use the service. :(

  • warning: dict_ldap_lookup: Search error 1: Operations error

    - by drecute
    Please, I need help with the LDAP search filter to use to retrieve a user's email information from LDAP. I'm running the Postfix LDAP integration on Ubuntu Server 12.04. Everything seems to work fine, except getting the values returned from the search.

    Version 1:

        server_host = ldap://samba.example.com
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        bind = no
        domain = example.com

    Version 2:

        server_host = ldap://samba.example.com
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        bind_dn = cn=Users,dc=company,dc=example,dc=com
        domain = example.com

    Mail logs:

        Nov 26 11:13:26 mail postfix/smtpd[19662]: match_string: example.com ~? example.com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_lookup: No existing connection for LDAP source /etc/postfix/ldap-aliases.cf, reopening
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Connecting to server ldap://samba.example.com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Actual Protocol version used is 3.
        Nov 26 11:13:26 mail postfix/smtpd[19662]: dict_ldap_connect: Binding to server ldap://samba.example.com with dn cn=Users,dc=company,dc=example,dc=com
        Nov 26 11:13:26 mail postfix/smtpd[19662]: warning: dict_ldap_connect: Unable to bind to server ldap://samba.example.com with dn cn=Users,dc=company,dc=example,dc=com: 49 (Invalid credentials)
        Nov 26 11:13:26 mail postfix/smtpd[19662]: warning: ldap:/etc/postfix/ldap-aliases.cf lookup error for "[email protected]"
        Nov 26 11:13:26 mail postfix/smtpd[19662]: maps_find: virtual_alias_maps: [email protected]: search aborted
        Nov 26 11:13:26 mail postfix/smtpd[19662]: mail_addr_find: [email protected] -> (try again)
        Nov 26 11:13:26 mail postfix/smtpd[19662]: NOQUEUE: reject: RCPT from col0-omc3-s2.col0.hotmail.com[65.55.34.140]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<col0-omc3-s2.col0.hotmail.com>
        Nov 26 11:13:26 mail postfix/smtpd[19662]: > col0-omc3-s2.col0.hotmail.com[65.55.34.140]: 451 4.3.0 <[email protected]>: Temporary lookup failure

    Here's another log with a successful search result that nevertheless failed to return the values:

        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: Using existing connection for LDAP source /etc/postfix/ldap-aliases.cf
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Searching with filter [email protected]
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_get_values[1]: Search found 1 match(es)
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_get_values[1]: Leaving dict_ldap_get_values
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: Search returned nothing
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: [email protected]: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: In dict_ldap_lookup
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Skipping lookup of key 'tola.akintola': domain mismatch
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: tola.akintola: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: In dict_ldap_lookup
        Nov 26 12:04:56 mail postfix/smtpd[20463]: dict_ldap_lookup: /etc/postfix/ldap-aliases.cf: Skipping lookup of key '@example.com': domain mismatch
        Nov 26 12:04:56 mail postfix/smtpd[20463]: maps_find: virtual_alias_maps: @example.com: not found
        Nov 26 12:04:56 mail postfix/smtpd[20463]: mail_addr_find: [email protected] -> (not found)

    My refined ldap-aliases.cf looks like this:

        server_host = ldap://samba.example.com
        server_port = 3268
        search_base = dc=company, dc=example, dc=com
        query_filter = mail=%s
        result_attribute = uid
        bind_dn = cn=Administrator,cn=Users,dc=company,dc=example,dc=com
        bind_pw = pass
        domain = example.com

    So I'd like to know what LDAP filter is appropriate to get this to work. Thanks for helping out.
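
    A note on the second trace: it has the classic signature of a result-attribute mismatch rather than a filter problem. The search finds one entry ("Search found 1 match(es)") but "Search returned nothing" because the entry carries none of the requested result attributes; Postfix's LDAP tables default result_attribute to maildrop, which Active Directory/Samba entries typically lack. A sketch of the lines to check, where the attribute that actually holds the deliverable address depends on the directory schema:

        # return an attribute the AD/Samba entry actually has;
        # mail or sAMAccountName are common choices here
        query_filter = (mail=%s)
        result_attribute = mail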

  • How to setup Lucene/Solr for a B2B web app?

    - by Bill Paetzke
    Given:

    - 1 database per client (business customer)
    - 5,000 clients
    - Clients have between 2 and 2,000 users (avg is ~100 users/client)
    - 100k to 10 million records per database
    - Users need to search those records often (it's the best way to navigate their data)

    Possibly relevant info:

    - Several new clients each week (any time during business hours)
    - Multiple web servers and database servers (users can log in via any web server)
    - Let's stay agnostic of language or SQL brand, since Lucene (and Solr) have a breadth of support

    For example: Joel Spolsky said in Podcast #11 that his hosted web app product, FogBugz On-Demand, uses Lucene. He has thousands of on-demand clients, and each client gets their own database. They use an index per client and store it in the client's database. I'm not sure of the details, and I'm not sure if this is a serious mod to Lucene.

    The question: how would you set up Lucene search so that each client can only search within its own database?

    - How would you set up the index(es)?
    - Where do you store the index(es)?
    - Would you need to add a filter to all search queries?
    - If a client cancelled, how would you delete their (part of the) index? (this may be trivial; not sure yet)

    Possible solutions (see the sketch after this list):

    1. Make an index for each client (database). Pro: search is faster than with the one-index-for-all method, and indices stay proportional to the size of each client's data. Con: I'm not sure what this entails, nor do I know if it is beyond Lucene's scope.
    2. Have a single, gigantic index with a database_name field, and always include database_name as a filter. Pro: not sure; maybe good for the tech support or billing department to search all databases for info. Con: search is slower than the index-per-client method, and security is flawed if the query filter is ever removed.

    One last thing: I would also accept an answer that uses Solr (the extension of Lucene). Perhaps it's better suited for this problem. Not sure.
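
    For the single-big-index variant, the tenant filter is only a few lines in a reasonably recent Lucene. The field and client names below are illustrative, and an open IndexWriter (writer) plus a parsed user query (userQuery) are assumed to be in scope; an index-per-client setup simply drops the filter clause and opens a different index directory per tenant:

        // At index time: stamp every document with its tenant.
        Document doc = new Document();
        doc.add(new StringField("database_name", "client42", Field.Store.NO));
        doc.add(new TextField("body", recordText, Field.Store.YES));
        writer.addDocument(doc);

        // At query time: AND the user's query with a mandatory tenant clause.
        // Occur.FILTER matches like MUST but does not affect scoring.
        BooleanQuery query = new BooleanQuery.Builder()
                .add(userQuery, BooleanClause.Occur.MUST)
                .add(new TermQuery(new Term("database_name", "client42")),
                     BooleanClause.Occur.FILTER)
                .build();

        // Client cancellation in this layout is a single call:
        writer.deleteDocuments(new Term("database_name", "client42"));

    The security concern raised in the question stands either way: with one big index the tenant clause must be added server-side where users cannot tamper with it, whereas with an index per client the isolation is physical.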

  • RoboCopy Log File Analysis

    - by BobJim
    Is it possible to analyse the log text file output by RoboCopy and extract the lines which are flagged as "New Dir" and "Extra Dir"? I would like each matching line from the log, with all the details returned for that "New Dir" or "Extra Dir". The reason for completing this task is to understand how two folder structures have changed over time: one version has been kept internally at the parent company, the second has been used by a consultancy. For your information, I am using Windows 7.
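
    Since Windows 7 ships with PowerShell, a short pipeline does this; the log file name and output path are assumptions, and the pattern can be extended with more tags in the same way:

        # Keep every log line tagged "New Dir" or "Extra Dir", with its full details.
        Select-String -Path .\robocopy.log -Pattern 'New Dir|Extra Dir' |
            ForEach-Object { $_.Line } |
            Set-Content .\dir-changes.txt

    Each emitted line keeps whatever RoboCopy wrote for that directory (tag, file count, full path), which is enough to diff how the two folder structures have diverged.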
