Search Results

Search found 1124 results on 45 pages for 'indexing'.

  • Help with DB Structure, vOD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database it is proving to be very slow. I have asked this question before and someone suggested indexing, but I cannot seem to get my head around it. I would like someone to help with the structure of the database here to see if it can be improved. The core table is Videos:

        ID bigint(20) (primary key, auto-increment)
        pID text
        airdate text
        title text
        subject mediumtext
        url mediumtext
        mID int(11)
        vID int(11)
        sID int(11)

    pID is a unique 5-digit string for each video that serves as a shorthand identifier. airdate is the timestamp (stored in text format; right there maybe I should change that to an auto-updating TIMESTAMP), title and subject are self-explanatory, url is the hard link on the site to the video, mID joins to another table for the module title, vID joins to another table for the language of the video (English, Russian, etc.), and sID is the summary for the module, a paragraph stored in an external database. The slowest part of the website is the logging. I store that data in another table called Hits:

        id mediumint(10) (primary key, auto-increment)
        progID text
        ts int(10)

    Again (this was all made a while ago), my timestamp (ts) is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. This table is now 47,492 rows long, and the script I wrote to process it is so slow that it times out. A row is added to this table each time a user clicks 'Play' on the website: progID is the same as the pID, and it logs the PHP time() timestamp in ts. Basically I load the entire Hits table into an array and count the hits in each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about it. So my questions are as follows:
    1. Is there a better way of structuring the Videos table? If so, what do you suggest?
    2. Is there a better way of structuring Hits? If so, please help/tell me!
    3. Or are my tables fine and the PHP code just crappy?
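    A likely quick win, sketched below (an answer-style suggestion, not from the post): index the Hits columns the queries filter on, and let MySQL do the per-day counting instead of loading all 47k rows into PHP. Column names are taken from the post; the index prefix length assumes progID stays a 5-character code.

        -- TEXT columns need a prefix length to be indexed
        ALTER TABLE Hits ADD INDEX idx_prog_ts (progID(5), ts);

        -- Count hits per day in SQL rather than in a PHP loop;
        -- FROM_UNIXTIME works on the existing INT ts column
        SELECT DATE(FROM_UNIXTIME(ts)) AS day, COUNT(*) AS hits
        FROM Hits
        GROUP BY day
        ORDER BY day;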

  • Programming tourism

    - by Andrew_B
    I'm going on vacation to Paris, France for 10 days. Actually, it's my girlfriend's wish to go there, but I'm not very interested in visiting, sightseeing, etc. Recently, I came up with the idea of trying something like programming tourism. :) I'd like to do something related to programming in a startup-like company. I do not want a salary or any kind of compensation. I want to get an overview of the process, the social aspects, the environment, and "what it feels like" to develop software in another country. I'm from Russia. I've been a software developer since 2003. I prefer C# 4 but I'm ready to use anything Turing-complete. I have some MS certifications and am familiar with all .NETs since 1.1. Currently I'm finishing a PhD in CS. I'm interested in multidimensional indexing, and I can turn any piece of data and code into an OLAP system. :) But that would take too much time. What can I do? I have no more than one week, and I want a totally complete project in that short amount of time:
    - Implement some features in a well-tested project
    - Do a code review
    - Debug memory, performance, and concurrency issues
    - Do unit testing
    So, about the questions: Is it legal? I'm ready to sign an NDA if necessary. I'll have a tourist visa. Is it possible? I'm fairly sure that bureaucratic companies with lots of HRs and PMs will not allow such experiments, but small companies can afford it. I'm ready to guarantee support on my code after leaving for home. :) P.S. I still haven't started learning French. :) I hope it will not take too much time. :) P.P.S. Yes, it's girlfriend-approved. What's in it for me? It's fun. It's fun to see new systems and the people who created them. It's fun to complete meaningful things. Quickly. What's in it for them? A feature, a debug, a review, or a test. And if my short-term colleagues like this style of working, I can invite them to make the same trip to my company. :) I think in Russia it's even more exciting. :)

  • Hyper-V Server 2012 with Zambezi AMD FX-Series - Hardware assisted virtualization not present

    - by Vazgen
    I'm trying to set up VDI across Windows Server 2012 VMs running on Hyper-V 2012. The wizard's compatibility check for the virtualization host server failed with "Hardware-assisted virtualization is not present on the server". I'm running an FX-8120 CPU on an ASUS M5A97 motherboard. I know I'm supposed to enable No-Execute (see Hyper-V Hardware Considerations), but I cannot find that or any synonym of it in my motherboard's UEFI BIOS (NX, XD, EVP, XN... nothing). I found the PAE/NX/SSE2 Support Requirement Guide for Windows 8, which in short says "Windows 8 and Windows Server 2012 requires that systems must have processors that support NX, and NX must be turned on for important security safeguards to function effectively and avoid potential security vulnerabilities." This leads me to believe NX is on by default, since I was able to get this far and install Hyper-V 2012 and Windows Server 2012. I also tried disabling AVX in cmd with "bcdedit /set xsavedisable 1"; that did not resolve it. My Zambezi FX-8120 also supports RVI/SLAT/other synonyms (per AMD Processors with Rapid Virtualization Indexing Required to Run Hyper-V in Windows 8). What's going on here? I bought this CPU specifically after I had the same problems with an older AMD Athlon II, and I made sure to buy one with AMD-V and RVI. Thank you.
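    A quick way to verify what the CPU and firmware actually expose (a suggestion, not from the post): Sysinternals Coreinfo reports the virtualization-related feature flags from inside a running Windows install. A sketch, assuming coreinfo.exe has been downloaded and is on the PATH:

        :: -v lists virtualization features; an asterisk means present.
        :: On AMD, look for SVM (AMD-V) and NPT (nested page tables, i.e. RVI/SLAT).
        coreinfo -v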

  • Embedded Spotlight does not function in Outlook 2011

    - by syntaxcollector
    I have a rather strange problem. I manage a network of about 35 Macs, and we all recently switched from Mail.app to Outlook 2011 (please don't debate this; I've already had this conversation ad nauseam). We are using network home directories (NHD) served from a Windows file server over the SMB protocol. The problem I'm having is that Spotlight does not function inside of Outlook - but ONLY inside of Outlook. The global Spotlight can find all email and contacts within Outlook, but the embedded Spotlight cannot. As a test, I took one of my users and switched them from a network home directory to a portable home directory (PHD), meaning the home folder was copied to the local hard drive. This resulted in a working Spotlight within Outlook; as soon as I switched the user back to an NHD, however, it stopped working. I have already tried erasing the Spotlight index and killing the process to force re-indexing. I have exhausted all Spotlight troubleshooting, and since global search works, that is obviously not the issue. I believe it has something to do with the Spotlight plugin Microsoft wrote that is located in /Library/Spotlight. Any ideas?
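    One diagnostic worth adding (my suggestion, not from the post): confirm the Outlook importer is registered and force it to re-run, which isolates the plugin from the index itself. A sketch; the plugin bundle path is an assumption based on the post's /Library/Spotlight remark:

        # List registered Spotlight importers - Outlook's should appear here
        mdimport -L
        # Ask Spotlight to re-import everything claimed by that importer
        mdimport -r "/Library/Spotlight/Microsoft Outlook.mdimporter"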

  • Atlassian Crucible very slow on large repository

    - by Mitch Lindgren
    Hi everyone, my company has been running a trial of Atlassian Crucible for some months now. For repositories where it's working properly, users have given very positive feedback about the tool. The problem I'm having is that we have several different projects, each with its own repository, and some of those repositories are very large. One repository in particular has a large number of branches and probably around 9,000 files per branch. Browsing that repository in Crucible is extremely slow. Crucible is running on a CentOS VM. The VM has 4GB of RAM, and I've set Crucible's maximum at 3GB, of which it is currently using 2GB. I've brought this up in a support ticket with Atlassian, and they suggested the following: "In particular because you have a rather large SVN repository you will likely find that Fisheye will be creating a large index file on disk. To help improve performance a few things you can try are:
    - Increasing the memory available to Fisheye (see the document above).
    - Migrating to an external database: confluence.atlassian.com/display/FISHEYE/Migrating+to+an+External+Database
    - Excluding files and directories from your index that aren't needed: confluence.atlassian.com/display/FISHEYE/Allow+(Process)"
    (Sorry for not hyperlinking; I don't have the rep.) I've tried all of these things to an extent, but so far none have helped greatly. I was originally running Crucible on a Windows box with 2GB of RAM using the built-in HSQL DB. Moving to MySQL on CentOS saw a performance increase for some repositories, and made Crucible much more stable, but did not seem to help much with our biggest repository. There are only so many files/branches I can exclude from indexing while maintaining the tool's usefulness. That being the case, does anyone have any tips on how to speed up Crucible on large repositories, without investing in insanely powerful hardware? Thanks! Edit: To clarify, since I didn't mention it explicitly above, I am using FishEye.

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. BACKGROUND: We are a growing 'big(ish)'-data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization. Our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS/EC2 is not a silver bullet when faced with spending time to maybe rework your queries/code entirely 'out of the box'. With that said, I would appreciate any input on the following questions:
    1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with their growing cloud/AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know (after more optimization) with respect to Ubuntu/MySQL plus the added Amazon horsepower? Or is there some merit to NoSQL and the other technologies they offer?
    2. What are the key metrics I need to gather from Apache and MySQL, other than things like disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
    3. What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many other technologies they offer. Echoing my previous question: do you sometimes find it worth it (initially a gamble, basic homework aside) to dive into a whole new environment and end up finding a way of producing your data/site product more efficiently?
    4. Anything I should watch out for in projecting costs, or any other general advice when working with AWS folks, from anyone else whose company is very niche and very, very technical (scientifically - or anybody, for that matter)?
    Thanks very much for your input - I think this thread could be valuable to others as well.
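    On the metrics question, one low-effort starting point (my suggestion, not from the thread): enable MySQL's slow query log before migrating anything, so the cartesian-product offenders identify themselves. A my.cnf sketch; the thresholds are assumptions to tune:

        [mysqld]
        slow_query_log                = 1
        slow_query_log_file           = /var/log/mysql/slow.log
        long_query_time               = 2    # seconds
        log_queries_not_using_indexes = 1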

  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep the search engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, as the accessing hosts can vary. After some reading of the Apache config docs and other SF postings, I settled on the following design, which relies on restricting access to specific domain names only. The default virtual host is disabled in the Apache config as follows, relying on Apache's behavior of using the first virtual host as the site default:

        <VirtualHost *:80>
            # Anything matching this should be silently ignored.
        </VirtualHost>
        <VirtualHost *:80>
            ServerName secretsiteone.com
            DocumentRoot /var/www/secretsiteone.com
        </VirtualHost>
        <VirtualHost *:80>
            ServerName secretsitetwo.com
            ...
        </VirtualHost>

    Then each team member can add the domain names to their local /etc/hosts:

        xx.xx.xx.xx secrethostone.com

    My question is: is the above technique good enough to achieve the stated goals, especially restricting SE bots, or is it possible that bots would work around it? Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed here: How to disable default VirtualHost in apache2? - so the same question applies to that technique too. Also please note: the content is not highly secretive - the idea is not to devise something that is hack-proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it's released, and to prevent SE bots from indexing it.
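    To make the catch-all vhost reject requests explicitly instead of silently serving nothing, a minimal sketch (Apache 2.2 syntax; the ServerName and empty DocumentRoot path are placeholders):

        <VirtualHost *:80>
            ServerName default.invalid
            DocumentRoot /var/www/empty
            <Directory /var/www/empty>
                Order deny,allow
                Deny from all
            </Directory>
        </VirtualHost>

    Any request arriving by bare IP or an unknown Host header then gets a 403, which well-behaved crawlers will not index.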

  • Search inside of text files

    - by Matt
    So here is the situation. I currently run a mail server for my small non-profit company. My mail server (Merak Mail Server) keeps logs in .log files and mail as .tmp files. Essentially these are just text files kept on the server. The problem is that when I put text into the "Containing text" field in Windows Explorer, it always misses the files and tells me no results were returned; yet when I search the files one by one (painful at best), I find the files I need. Do I not understand the search feature well enough, or do I maybe have indexing set up wrong? I really don't care what I need to use to search the files - even a third-party app is fine with me. I just want to type an email address into a box, search all of my log files or email files, and find the one I am looking for. It can be Windows Search or something else; as long as I can find a way to get the job done, I will be happy. Paid solutions are fine as well. Thanks everyone in advance.
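    For what it's worth, a plain command-line pass avoids the indexer entirely. A sketch using the built-in findstr; the address and path are placeholders:

        :: /s = recurse subdirectories, /i = case-insensitive, /m = print matching filenames only
        findstr /s /i /m "user@example.com" "D:\MerakMail\Logs\*.log"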

  • How to make a huge ram drive?

    - by Brandon Moore
    At my old job, when a report was needed I could sit down with someone, pull up results, get immediate feedback, refine my queries, and ultimately have the data we needed, in the format we needed, within 30-90 minutes. I just started working for a new company with a database containing millions of records, and I spent my whole 8 hours making a report that I feel I could have made in less than 2 hours if it were not for the massive amount of data the queries are working with, and the fact that I couldn't ask the person needing the data to sit down with me and give me feedback as I pulled up results, as I am used to. So I am trying to think of how we can make the server faster... much faster, so that I can have the same level of productivity I'm used to. One thought that just came to mind is that memory is so cheap these days that by my calculations I could buy ten 8-gig RAM sticks for 1000 bucks. What I have never heard of, though, is a device that would let me combine these into a huge RAM drive. So I'd like to know if any such device exists, and if not, what is the largest RAM drive I could realistically make, and how would I go about doing so? EDIT: To you guys who are saying the database schema needs to be analyzed... you can't make a query such as "SELECT f1, f2, f3, etc FROM SomeTable" run any faster by normalizing or indexing the table. What I'm talking about IS ABSOLUTELY a need for improved performance at the hardware level. I am used to having results come back to me in a few seconds, not a few minutes, much less half an hour. Maybe that's what you guys are used to, if you have 100-billion-record tables and feel like that's fast, but I'm looking for results from tables with about 10 million records to come back to me within half a minute TOPS.

  • SharePoint Search: processing filenames containing underscores

    - by Todd Owen
    We use SharePoint Server 2007 to allow employees to search network file shares, but it seems that underscores in filenames are not treated as word separators when indexing the files. As a result, a search for chocolate will:
    - match "chocolate milkshake.doc"
    - but not match "chocolate_cake.doc"
    (Of course, this is a simplified example; in practice the content of the second file might include the word "chocolate" and match on that instead of the filename. But the problem itself is real enough, because a common scenario in a corporate environment is that a user knows the partial name of the file they are looking for and expects to see matching filenames at the top of the search results. And using underscores in filenames is a widely used convention within our company.) Underscores are not treated as word separators in the file content either, although this is less of a concern for us. The root cause of this problem is possibly related to the behaviour of the word breakers that SharePoint uses (i.e. the language-specific DLLs that implement the IWordBreaker interface), although I haven't confirmed this yet. Does anyone know of a workaround for this issue? I have tested with Search Server 2008 Express too (which is based on the same technology), and it is also affected. I do not know whether the problem is fixed in SharePoint 2010 or not.

  • Web based file search in the lan?

    - by Magnetic_dud
    I would like to search files on my LAN easily (over 500k files on SMB shares; it would take ages any other way). I just need a quick search on file names; I don't care about content indexing at all, as most of my files are in a proprietary format and the file names are explanatory enough. But date-range filters are a must for me. I just need a quick search like voidtools' Everything can do, but in a networked way. The files are on a WHS box (lol, "Videos" and "Music" share names are not appropriate for a company, but a license for that Win2003-based OS is cheaper than an XP Home one!). I tried:
    - LanSearch Pro: not good for me, as I need a quick index
    - Network Search Engine: it would be perfect, but does not offer a date-range filter
    - Microsoft Search Server 2008 Express, but it is horrible! First, it does NOT index filenames, and then, my Core2Duo is not powerful enough to run it smoothly
    - Google Desktop with a proxy on localhost to make it run on the LAN, but I don't like the hacked result
    - The preinstalled Windows Search 4.0, but it sucks totally at choosing the relevance of data - uninstalled
    - Docco... what's that?
    I am considering trying:
    - IBM OmniFind
    - DocFetcher (can it work as a client? haven't investigated yet)
    - Strigi (it looks like it can work as a client, right?)
    Any ideas/suggestions?

  • redirect landing without changing url

    - by Gerard
    I'm working with Apache and Joomla. I would like to have several URIs land on the same page, but without the URL changing. So, for example:

        example.com/luggage/delay/iberia
        example.com/luggage/delay/aireuropa
        example.com/luggage/delay/ryanair

    all need to point to example.com/luggage/delay, but the address bar must keep showing the different airline companies. I've been messing around with mod_rewrite with the following attempts:

    1:

        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-iberia [NC,OR]
        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-vueling [NC,OR]
        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-ryanair [NC,OR]
        RewriteCond %{REQUEST_URI} /equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa [NC,OR]
        RewriteRule ^(.*)$ /equipaje/problemas-de-equipaje/perdida-equipaje [P]

    2:

        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-iberia equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]
        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-vueling equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]
        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-ryanair equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]
        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-aireuropa equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]

    The idea is to have different URLs for AdWords, and then I'll prevent indexing of the airline landing pages to avoid duplicated content. I've been searching for several examples and trying to understand how the module works, but so far I haven't accomplished it... Is it possible to do this with mod_rewrite? Many thanks!
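    For what the post is attempting, one consolidated rule can replace both attempts (a sketch against the paths in the post, assuming it lives in the site's .htaccess; an internal rewrite with [L] and no redirect or proxy flag leaves the address bar untouched):

        RewriteEngine On
        RewriteRule ^equipaje/problemas-de-equipaje/perdida-equipaje-(iberia|vueling|ryanair|aireuropa)$ equipaje/problemas-de-equipaje/perdida-equipaje [NC,L]

    With Joomla in front, the rewritten target still has to be a path Joomla can route, so it is worth testing against the site's SEF URL settings as configured in production.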


  • Utility to grant admin rights to a user in Windows XP for few hours/days?

    - by user15660
    Hi, I have two accounts on my Windows XP Home desktop. The default limited user is used for everything, and the 2nd user, which has admin rights, is used only for installations. I do this to avoid malware infestations during web browsing, and the limited user account guards against online threats to a good extent, but many programs refuse to run under limited rights, like Revo Uninstaller. Many installs I run from the limited user by selecting "Run as" from the right-click context menu of the .exe file, but some apps need admin rights for certain. I use "switch user" to go to the admin account and do the install/uninstall, but the admin user has none of my preferences or bookmarks set up, nor is my Locate32 indexing done and ready for fast search. Is there a utility which I can "Run as" the administrator login and use to grant my limited user admin rights for a limited period, like a few hours or days? Please help. I guess MS might have closed many doors on this for fear of exploitation of the API. Are there any?
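    There's no built-in timer for this, but the grant and revoke are each one command, and the revoke can be scheduled (a sketch, not a known utility; "LimitedUser" is a placeholder, and both commands run from the admin account). Note the change only takes effect at the user's next logon:

        :: grant admin rights
        net localgroup Administrators LimitedUser /add
        :: schedule the revoke for 11 pm the same day (Task Scheduler service must be running)
        at 23:00 "net localgroup Administrators LimitedUser /delete"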

  • Trying to limit IMAP folders/mailboxes my iPhone/iPad sees

    - by QuantumMechanic
    (Note: I am using dovecot 1.0.10 on Ubuntu 8.04.4 LTS. Yes, I know I need to upgrade before next year.) (Note: the SMTP/IMAP server in question only serves my family, so there are only a very few users. Certainly what I propose below, even if it works, would be a logistical nightmare with any significant number of users.) I have noticed (and have confirmed via Google) that the iOS Mail app is terrible in its handling of IMAP subscriptions, namespaces, etc. For example, my iPhone and iPad will see EVERYTHING (all mailboxes, folders, etc.), whereas clients like Thunderbird, alpine, etc. only see what I tell them to see. This makes it an incredible pain to move mail between mailboxes, because I have to scroll through a gazillion things. The mail_location in dovecot.conf is:

        mail_location = mbox:%h/Mail/:INBOX=/var/mail/%u

    To get around this, I've been considering doing the following for user foo: first, create a dovecot userdb with a foo-ios virtual user in it, whose UID is identical to that of the real (in /etc/passwd) foo user and whose homedir is /home/foo-ios; then:

        ln -s /var/mail/foo /var/mail/foo-ios
        mkdir -p /home/foo-ios/Mail
        cd /home/foo-ios/Mail
        ln -s /home/foo/Mail/mailbox-i-want-visible mailbox-i-want-visible
        # ...symlinks for the rest of the limited set of mailboxes/folders
        # I want visible to the iOS Mail app...
        chown -R foo:foo /home/foo-ios

    Finally, change the iOS Mail app settings to log in as user foo-ios instead of user foo. Will this work, or will there be some index/file-corruption hell because there will be two sets of indexes (one set living in /home/foo/Mail/.imap and the other set living in /home/foo-ios/Mail/.imap) indexing the same underlying mbox files? And I'd be more than happy to hear of a better way to do this with dovecot! (Or to hear that dovecot 2.x works better with iOS devices.)

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

        IPS="list of IPs"
        iptables --new-chain webtest
        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done
        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.:

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through via the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match-state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? And how exactly are these three examples slipping through?
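    One way to see which rule is admitting these packets (a suggestion, not from the post): watch the per-rule counters, and log whatever reaches the end of the chain. A sketch:

        # Packet/byte counters per rule show which ACCEPT is matching
        iptables -L INPUT -v -n --line-numbers
        iptables -L webtest -v -n --line-numbers
        # Log just before the final DROP (the position number is an assumption;
        # read the real one off the --line-numbers output first)
        iptables -I webtest 3 -j LOG --log-prefix "webtest-drop: " --log-level 4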

  • Windows 7 BSOD when I walk away from the computer

    - by bobobobo
    I have a really weird experience with this new AM3 box I set up. Everything seemed fine at first, but now I get frequent (daily) BSODs, mostly when I walk away from the computer for more than 5 minutes (i.e. when the computer is idle). The BSODs, as shown by BlueScreenView, almost always involve ntoskrnl.exe, and they carry really normal-sounding stop codes: SYSTEM_SERVICE_EXCEPTION, NTFS_FILE_SYSTEM, DRIVER_IRQL_NOT_LESS_OR_EQUAL, KMODE_EXCEPTION_NOT_HANDLED, BAD_POOL_HEADER - you name it. These are basically them, and they repeat after that in random order; it's not as if one is more consistently the problem than another. I have Windows Update on and I let it do its updates every time it wants to. I turned the Windows indexing service off, but Windows 7 seems to do a lot of background processing when I'm away - I'll come into the room and the fan will be going nuts (I'm pretty sure this isn't because the computer flies into a panic whenever I leave). I tried finding updates for my ASUS board, and really there isn't much to install (except some new driver firmware). I'm going to install that to see if it helps, but what else could it be? Is it possible it's a hardware issue with the board? Or is every Windows 7 user experiencing daily BSODs these days and I just don't know about it?
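    Not from the post, but the standard next step for a spread of random ntoskrnl stop codes like this is to stress the drivers and the RAM separately; a sketch:

        :: enable Driver Verifier's standard checks for all drivers, then reboot -
        :: a buggy driver should now BSOD with its own name in the crash dump
        verifier /standard /all
        :: turn it off again once done
        verifier /reset

    (and run a pass of Memtest86+ from boot media to rule the RAM out.)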

  • Dynamically blocking excessive HTTP bandwidth use?

    - by Jeff Atwood
    We were a little surprised to see this on our Cacti graphs for June 4 web traffic. We ran Log Parser on our IIS logs, and it turns out this was a perfect storm of Yahoo and Google bots indexing us: in that 3-hour period, we saw 287k hits from 3 different Google IPs, plus 104k from Yahoo. Ouch? While we don't want to block Google or Yahoo, this has come up before. We have access to a Cisco PIX 515E, and we're thinking about putting it in front so we can dynamically deal with bandwidth offenders without touching our web servers directly. But is that the best solution? I'm wondering if there is any software or hardware that can help us identify and block excessive bandwidth use, ideally in real time - perhaps some piece of hardware or open-source software we can put in front of our web servers? We are mostly a Windows shop, but we have some Linux skills as well; we're also open to buying hardware if the PIX 515E isn't sufficient. What would you recommend?
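    Since these particular offenders are well-behaved crawlers, the cheapest lever (my suggestion, not from the post) is robots.txt rather than the firewall: Yahoo and Bing honor the Crawl-delay directive, while Google ignores it and is throttled from Google Webmaster Tools instead. A sketch:

        User-agent: Slurp
        Crawl-delay: 10

        User-agent: *
        Crawl-delay: 5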

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index the files stored on network-attached storage under Windows 7, so that the files can be available in Windows Search and Libraries? I am referring to cheap, readily available NAS units like the Western Digital My Book series that run an embedded Linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html EDIT: Windows help proposes making the files stored on the NAS available offline. This is obviously not a good solution if the NAS holds more data than the client can store: "If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library. If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library. When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files."


  • program to track recent saved files or downloads?

    - by DiegoDD
    Back in the Windows XP days, I used a program named Ava Find that could find any file on the system, and it had a really nice feature called "Scout Bot" that automatically found new files, that is, recently created ones. More info here: AvaFind. For example, if I saved or downloaded a file (in any program) but couldn't remember its name or where I put it, I simply looked at the Scout Bot list, which showed me the most recent files, and then I could open them from there or jump to their paths. Since I changed to Windows 7 (passing through Vista), the program no longer works. It simply keeps indexing forever and never finds anything. I don't know whether it is incompatible with Win 7 itself, or with the fact that my Windows 7 is 64-bit. I already tried to contact the AvaFind creators, but they never answered, and the program seems to be no longer supported (although the site still works). Now, is there a similar program that finds RECENT files, e.g. listing the most recently saved files system-wide (or even limited to some folders/discs)? And one that, of course, is confirmed to work with Win 7 64-bit. I know there are many programs that can find any file, like Everything, but what I want is to find recent files and their paths without knowing their name or extension. Everything can also sort results by date, which is almost what I need - simply look for part of the file name or extension and get the most recent one - but if I get too many results it takes a while, plus I STILL need to know the name or at least the extension of the file. What I want is simply a "most recently saved files" tool, i.e., a replacement for Scout Bot in Ava Find. Do you know any alternative?
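    As a stopgap, PowerShell (built into Windows 7) can emulate a "most recently saved" list without any indexer; a sketch, where the path is a placeholder:

        # 20 most recently written files under D:\Data, newest first
        Get-ChildItem D:\Data -Recurse -ErrorAction SilentlyContinue |
          Where-Object { -not $_.PSIsContainer } |
          Sort-Object LastWriteTime -Descending |
          Select-Object -First 20 LastWriteTime, FullName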

  • Best SSD tweaks for Windows 7

    - by Nick Berardi
    I have seen many articles about tweaking an SSD, but many of them seem outdated or too broad (read: general tweaks for all of Windows XP, Vista, and Windows 7). And I know that Windows 7 has been specifically tuned for SSDs by the Windows team, so I don't want to follow something written with Windows XP in mind and end up circumventing something the Windows team specifically designed into Windows 7. So my question is: what are the best SSD tweaks for Windows 7 that you have found to get the most performance out of your drive? I hope to build a comprehensive list in the answers below, so there won't be so much disinformation in the forums about what to do and what not to do. Here are a few that I see posted on the forums a lot, and some questions to get the discussion started:
    - Disabling Superfetch. Yes or no?
    - Disabling the page file, or limiting it to a really small size such as 500 MB.
    - Disabling indexing. Yes or no?
    - Disabling defragmenting. Yes or no?
    What are your thoughts? Do you have any tweaks that have worked for you? When providing an answer, please do your best to back it up with a reason and possibly some documentation from MSDN, TechNet, or another credible source.
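    One check that belongs at the top of any such list (my addition, not from the post): confirm Windows 7 actually recognized the drive as an SSD and is issuing TRIM, since much of the automatic tuning hangs off that detection. A sketch from an elevated prompt:

        :: DisableDeleteNotify = 0 means TRIM commands are being issued
        fsutil behavior query DisableDeleteNotify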


  • Win7 'locking' process/files/folders?

    - by Dynde
    I've had a fair bit of problems with files/folders/processes sometimes being 'locked' by Windows. The weird thing is that it's not locking in the traditional sense, I think, where tools like UnlockIT and WhoLockMe would work. It seems that just giving it a little time often helps, making me think it could be the HDD, the memory, or something in Windows. A scenario: I go into a folder (without opening anything at all), go back up, then cut or drag the folder somewhere else, and it says "Action can't be completed because the folder or a file in it is open in another program". After waiting sometimes 20 seconds, sometimes a little longer, I can move it. Another scenario is deleting a bunch of files in a folder: it appears that everything is gone, but then after a few seconds an .exe file pops back up, and I can't delete it. Waiting a few minutes and pressing refresh, and it's gone. I have the strangest feeling that there's a problem with either the HDD or the memory. I already tried disabling the Windows indexing service, with no luck. Does anyone have any ideas? EDIT: I should say that I have a very fast system - 16 GB DDR3 RAM, i7-2600K CPU, SSD main HDD - so I really should not be experiencing any sort of problem where one might say it's "reasonable" for the system not to respond right away. EDIT 2: And I updated the SSD firmware a couple of months ago, so it shouldn't be bad release firmware either.
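    When it happens again, Sysinternals Handle can show whether any process really holds the folder open (a diagnostic suggestion, not from the post; run it from an elevated prompt, and the path is a placeholder):

        :: lists open handles whose path contains the given string
        handle.exe "C:\path\to\stuck-folder"

    If nothing shows up yet Explorer still refuses the move, that points away from an application lock and toward the filesystem/driver layer.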

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. The error is: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through the Database Engine Tuning Advisor to apply indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.
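    On the snapshot wish: SQL Server 2005's dynamic management views can capture exactly that, and the query below could be scheduled as a SQL Agent job so it runs during spikes when nobody is around. A minimal sketch:

        -- Currently executing requests with their SQL text and CPU time
        SELECT r.session_id,
               r.cpu_time,
               r.total_elapsed_time,
               r.wait_type,
               t.text AS sql_text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.session_id > 50;  -- session_ids <= 50 are typically system sessions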
