Search Results

Search found 82071 results on 3283 pages for 'file url'.

Page 352 of 3283

  • Is there an application that fakes being the default browser and lets me choose which real browser to use for a provided URL?

    - by Dzmitry Lahoda
    Is there any application for Windows that does the following: I click a URL in Skype or an HTML file in Explorer. The application is a "fake" default browser, i.e. it is registered as the default browser. It shows several buttons, each representing an installed or running browser. I can choose a real browser, click it, and the URL opens in that browser. A quick search did not reveal such an application. Context: I work in an environment where some sites only work in specific browsers, and I get clickable URLs from different applications. Sometimes I want to launch a specific browser to use a specific add-in of it against the URL provided, and I have a portable "secured" browser that I want to launch only for trusted sites.
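
    For what it's worth, the core of such a chooser is small. Below is a rough sketch in Python/Tkinter, assuming the script has been registered as the system's default browser so that Windows hands it the clicked URL as a command-line argument; the browser names and paths are placeholders, not detected installations.

        import subprocess
        import sys
        import tkinter as tk

        # Placeholder browser paths -- replace with whatever is actually installed.
        BROWSERS = {
            "Internet Explorer": r"C:\Program Files\Internet Explorer\iexplore.exe",
            "Firefox": r"C:\Program Files\Mozilla Firefox\firefox.exe",
            "Portable secured browser": r"D:\Portable\SecureBrowser\browser.exe",
        }

        def launch(path, url, root):
            """Start the chosen browser with the URL and close the chooser."""
            subprocess.Popen([path, url])
            root.destroy()

        def main():
            # The clicked URL arrives as the first command-line argument when
            # Windows invokes the registered "default browser".
            url = sys.argv[1] if len(sys.argv) > 1 else "about:blank"
            root = tk.Tk()
            root.title("Open with...")
            tk.Label(root, text=url).pack(padx=10, pady=4)
            for name, path in BROWSERS.items():
                tk.Button(root, text=name,
                          command=lambda p=path: launch(p, url, root)).pack(
                    fill="x", padx=10, pady=4)
            root.mainloop()

        if __name__ == "__main__":
            main()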

    Read the article

  • Add a right-click context menu in certain Windows folders that will open a web browser URL?

    - by jasondavis
    In a Windows Explorer window where you browse files in Windows 7, I would like to add a new context menu entry that lets me open a file on my local dev server. It would have to open a browser like Google Chrome, and the URL would have to be the file path, slightly modified by removing part of it and prepending my localhost URL. For example, if the file I am right-clicking on has the path E:\Server\htdocs\labs\php\testProject\test.php, I would need a context menu button "Open in Browser" that opens my web browser with a URL like this: http://localhost/labs/php/testProject/test.php. I would love to be able to do this; any ideas or help would be greatly appreciated! To go one step further, it would be nice to somehow make the context menu item only show up on files that are under the folder E:\Server\htdocs, but this is far less important.
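
    A sketch of the path-to-URL mapping half of this: a small helper script that the context menu command could invoke with the selected file path (registering the menu entry itself, e.g. under HKEY_CLASSES_ROOT, is a separate step). Shown in Python for illustration; the document root and base URL are the ones from the example above, and everything else is an assumption.

        import sys
        import webbrowser

        DOC_ROOT = r"E:\Server\htdocs"   # local document root from the example
        BASE_URL = "http://localhost"    # server that serves that root

        def to_url(path):
            """Map a file path under DOC_ROOT to its localhost URL."""
            if not path.lower().startswith(DOC_ROOT.lower()):
                raise ValueError("file is not under the web root: %s" % path)
            relative = path[len(DOC_ROOT):].replace("\\", "/")
            return BASE_URL + relative

        if __name__ == "__main__":
            # The context menu command would pass the right-clicked file as %1.
            webbrowser.open(to_url(sys.argv[1]))

    The only-under-E:\Server\htdocs refinement would then mostly be a matter of where the menu entry is registered (or of the script refusing paths outside DOC_ROOT, as the check above already does).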

    Read the article

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what I do, I keep getting the following: File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch' I've noticed that the first part of the '3063-3e6' in the above message is the last successfully committed revision in the repository. My Apache logs don't give much more information: [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk. [404, #0] [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy. [404, #160013] [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch' [404, #160013] This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server this is occurring on. The permissions on this repository match every other repository, and disk space is not an issue.

    Read the article

  • Linux automatically changing permissions on resolv.conf

    - by rikr
    On various Linux servers I see the permissions of the /etc/resolv.conf file change automatically. In the normal state:

        -r--r--r-- 1 root root 103 Jul 4 11:50 resolv.conf

    In the changed state:

        -r--r----- 1 root root 103 Jul 4 11:50 resolv.conf

    I installed auditd to monitor it, and these are the two entries around the change:

        type=PATH msg=audit(07/04/2012 12:20:02.719:303) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,644 ouid=root ogid=root rdev=00:00
        type=CWD msg=audit(07/04/2012 12:20:02.719:303) : cwd=/
        type=SYSCALL msg=audit(07/04/2012 12:20:02.719:303) : arch=x86_64 syscall=open success=yes exit=3 a0=7feeb1405dec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3445 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null)

        type=PATH msg=audit(07/04/2012 12:50:03.727:304) : item=0 name=/etc/resolv.conf inode=137102 dev=fe:00 mode=file,440 ouid=root ogid=root rdev=00:00
        type=CWD msg=audit(07/04/2012 12:50:03.727:304) : cwd=/
        type=SYSCALL msg=audit(07/04/2012 12:50:03.727:304) : arch=x86_64 syscall=open success=yes exit=3 a0=7f2bcf7abdec a1=0 a2=1b6 a3=0 items=1 ppid=1585 pid=3610 auid=unset uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=(none) ses=4294967295 comm=hostid exe=/usr/bin/hostid key=(null)

    Any ideas?

    Read the article

  • Migrating ODBC information through a batch file

    - by DeskSide
    I am a desktop support technician currently working on a large-scale migration project across multiple sites. I am looking for a way to transfer ODBC entries from Windows XP to Windows 7. If anyone knows of a program or anything prebuilt that already does this, please redirect me; I've already looked but haven't found anything, so I'm trying to build my own. I know enough basic programming to read the work of others and monkey around with something that already exists, but not much else. I have come across a custom batch file written at one site that (among other things) exports ODBC information from the old computer and stores it on a server (labelled as y: through net use at the beginning of the file), then later transfers it from the server to a new computer. The pre-existing code is for Windows XP to XP migrations. Here are the pertinent bits of code:

        echo Exporting ODBC Information
        start /wait regedit.exe /e "y:\%username%\odbc.reg" HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI

    (and later on)

        echo Importing ODBC
        start /wait regedit /s "y:\%username%\odbc.reg"

    We are now migrating from Windows XP to 7, and this part of the batch file still seems to work for this particular site, where Oracle 8i and 10g are used. I'm looking to use my cut-down version of this code at multiple sites, and I'm wondering if the same lines of code will still work for anything other than Oracle. Also, my research on this issue has shown that there are different locations in 64-bit operating systems for 32/64-bit entries, and I'm wondering what effect that would have on the code. Could I copy the same data to both parts of the registry, in hopes of catching everything? Any assistance would be appreciated. Thank you for your time.
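
    On the 32/64-bit question: on 64-bit Windows, 32-bit ODBC entries live in the Wow6432Node (32-bit) view of HKLM\SOFTWARE\ODBC\ODBC.INI, so a single export can miss one of the two views. Here is a hedged sketch, in Python 3 with the standard winreg module, of how the two views can be listed and compared before deciding what to export and where to import it; the key path is the one from the batch file above, the rest is illustrative.

        import winreg

        # The same key the batch file exports; on 64-bit Windows it exists in
        # two views: the native 64-bit view and the Wow6432Node (32-bit) view.
        ODBC_PATH = r"SOFTWARE\ODBC\ODBC.INI"

        def list_dsns(view_flag):
            """Return the DSN subkey names visible in one registry view."""
            dsns = []
            try:
                key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, ODBC_PATH, 0,
                                     winreg.KEY_READ | view_flag)
            except OSError:
                return dsns
            i = 0
            while True:
                try:
                    dsns.append(winreg.EnumKey(key, i))
                except OSError:
                    break
                i += 1
            winreg.CloseKey(key)
            return dsns

        print("64-bit DSNs:", list_dsns(winreg.KEY_WOW64_64KEY))
        print("32-bit DSNs:", list_dsns(winreg.KEY_WOW64_32KEY))

    Importing the same .reg data into both views ("catching everything") is generally harmless for plain DSN values, but the drivers themselves are bitness-specific, so a DSN only works in the view whose driver is actually installed.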

    Read the article

  • Where is the php.ini file in Ubuntu hardy?

    - by April
    Where is the php.ini file in Ubuntu Hardy? The only file I found is /etc/php5/apache2/php.ini.ucf-dist. Do I need to rename 'php.ini.ucf-dist' to 'php.ini'? Where is the system currently reading the PHP configuration from when I run an info.php file? I tried to compare one setting as displayed by info.php with what is in the 'php.ini.ucf-dist' file: 'memory_limit' shows 128M when I run info.php, but in the 'php.ini.ucf-dist' file it is set to 16M. So this can't be the same file the system is currently reading? Thanks for the help.

    Read the article

  • Different user group cannot upload files to the server

    - by Dallal
    I have a CentOS server running in Thailand, and I'm in Canada. The guy at the computer center who set up the server for me doesn't really understand much about Linux and left an issue for me to solve myself. I just moved from a Mac server to a Linux server, and the first problem I'm facing is: `file name` has failed to upload due to an error: The uploaded file could not be moved to `location name`. From experience I know these problems are all about permissions. So I went ahead and checked my whole folder and found that everything in the folder is owned as myusername mygroupname, then I checked the httpd configuration on the server and it defaults to apache apache. My question is: how can I put my user in the same group as the apache group, so that I don't have any problems uploading or changing data in my files, but without affecting other users on the same server? I hold an administrator account, not the root account, but I can change stuff on the server root without a problem. When I was with godaddy.com there was never any problem with permissions, and I wish I knew how they configure that :(

    Read the article

  • Escaping %’s in file-/folder-names at the command-line

    - by Synetech
    Does anybody know of a way to access files and directories that have a % in their name (which is valid) from the command-line? Specifically, if there are two %’s and the text between them happens to correspond to an environment variable. For example, if there is a file called C:\blah\%temp%.txt or a folder called C:\Program Files\%temp%\, none of the following will work because the variable gets expanded:

        > dir "c:\blah\%temp%.txt"
        > dir "c:\blah\^%temp^%.txt"
        > dir "c:\blah\%%temp%%.txt"
        > dir "c:\blah\\%temp\%.txt"
        > dir "c:\program files\%temp%"
        > dir "c:\program files\^%temp^%"
        > dir "c:\program files\%%temp%%"
        > dir "c:\program files\\%temp\%"

    Using wildcards will work, but does not uniquely select the file/folder and may include others:

        > dir "c:\blah\?temp?.txt"        (also shows ztempz.temp, 1tempa.txt, etc.)
        > dir "c:\program files\?temp?"   (likewise)

    (This is frustrating because every now and then—usually when Explorer is restarted for whatever reason—the environment variables stop expanding and some places where they are used end up creating files or directories with the environment variable in the name. For example, because I configured Chromium to store its cache in a subdirectory of %temp%, if the variable expands, it is fine, but when it doesn’t, Chromium creates a directory called %temp% under its own directory and stores the cache—which can get large—there. I want to add a line to my temp-/junk-file cleaning script to automatically delete that folder if it exists, but I cannot figure out how to access it from the command-line without resorting to wildcards.)
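
    For the cleanup-script angle, one way to sidestep cmd's expansion entirely is to do the deletion from a language that treats the name as an ordinary string. A small sketch in Python, where %temp% is never expanded; the Chromium path is purely illustrative.

        import os
        import shutil

        # A literal folder named "%temp%" -- Python performs no environment
        # expansion, so this string is never rewritten to the real temp directory.
        chromium_dir = r"C:\Users\Me\AppData\Local\Chromium\Application"  # illustrative
        stray_cache = os.path.join(chromium_dir, "%temp%")

        if os.path.isdir(stray_cache):
            shutil.rmtree(stray_cache)
            print("Removed", stray_cache)
        else:
            print("Nothing to clean at", stray_cache)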

    Read the article

  • How to fix Software Center crashes?

    - by shandna towns
    E: Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list E: The list of sources could not be read. shandanatowns@ubuntu:~$ software-center 2012-09-25 12:23:35,115 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-09-25 12:23:35,123 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:34:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:56:22: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:20: Not using units is deprecated. Assuming 'px'. (software-center:4524): Gtk-WARNING **: Theme parsing error: softwarecenter.css:60:22: Not using units is deprecated. Assuming 'px'. 2012-09-25 12:23:35,472 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2012-09-25 12:23:35,477 - softwarecenter.fixme - WARNING - logs to the root logger: '('/usr/lib/python2.7/dist-packages/gi/importer.py', 51, 'find_module')' 2012-09-25 12:23:35,477 - root - ERROR - Could not find any typelib for LaunchpadIntegration 2012-09-25 12:23:35,605 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None. 2012-09-25 12:23:35,987 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 257, in open self._cache = apt.Cache(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 102, in __init__ self.open(progress) File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 149, in open self._list.read_main_list() SystemError: E:Type '<!DOCTYPE' is not known on line 1 in source list /etc/apt/sources.list.d/medibuntu.list 2012-09-25 12:23:37,000 - softwarecenter.db.enquire - ERROR - _get_estimate_nr_apps_and_nr_pkgs failed Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/db/enquire.py", line 115, in _get_estimate_nr_apps_and_nr_pkgs tmp_matches = enquire.get_mset(0, len(self.db), None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__' Traceback (most recent call last): File "/usr/bin/software-center", line 182, in <module> app.run(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1385, in run self.show_available_packages(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1323, in show_available_packages self.view_manager.set_active_view(ViewPages.AVAILABLE) File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 151, in set_active_view view_widget.init_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 173, in init_view self.cache, self.db, self.icons, self.apps_filter) 
File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 81, in __init__ self.build() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 322, in build self._build_homepage_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 120, in _build_homepage_view self._append_whats_new() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 251, in _append_whats_new whats_new_cat = self._update_whats_new_content() File "/usr/share/software-center/softwarecenter/ui/gtk3/views/lobbyview.py", line 236, in _update_whats_new_content docs = whats_new_cat.get_documents(self.db) File "/usr/share/software-center/softwarecenter/db/categories.py", line 132, in get_documents nonblocking_load=False) File "/usr/share/software-center/softwarecenter/db/enquire.py", line 317, in set_query self._blocking_perform_search() File "/usr/share/software-center/softwarecenter/db/enquire.py", line 212, in _blocking_perform_search matches = enquire.get_mset(0, self.limit, None, xfilter) File "/usr/share/software-center/softwarecenter/db/appfilter.py", line 89, in __call__ if (not pkgname in self.cache and File "/usr/share/software-center/softwarecenter/db/pkginfo_impl/aptcache.py", line 277, in __contains__ return self._cache.__contains__(k) AttributeError: 'NoneType' object has no attribute '__contains__'

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It's funny that two posts so close together speak about flexibility with the LINQ to Twitter provider. There are certain things you know from experience about when to make software more flexible and when to save time. This is another one of those times when I got lucky and made the right choice up front. I'm talking about the ability to switch URLs.

    It only makes sense that Twitter should begin versioning their API as it matures. In fact, most of the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends. Recently, Twitter introduced the available and local trends, but hung them off the new v1, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place and that's what factories are for. Anyway, the point is that I wouldn't have to do this kind of stuff (as much fun as it is) if Twitter had more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true. …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that's all transparent if you're using LINQ to Twitter, because I pulled all of that together in a consistent way so that you don't have to.

    Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place. This time, in the case of a URL change, the adjustment is easy and no one has to wait for me. Essentially, all you need to do is change the URL passed to the TwitterContext constructor. Here's an example of instantiating a TwitterContext now:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))

    The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what's coming next; another constructor, but with the SearchUrl parameter set to the new URL as follows:

        using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))

    One consequence of setting the URL this way is that you set the URL for both Trends and Search. Since Search is still using the old URL, this is going to break for Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set. Alternatively, you can use the TwitterContext's SearchUrl property. Here's an example:

        twitterCtx.SearchUrl = "https://api.twitter.com/1/";

        var trends =
            (from trend in twitterCtx.Trends
             where trend.Type == TrendType.Daily &&
                   trend.Date == DateTime.Now.AddDays(-2).Date
             select trend)
            .ToList();

    Notice how I set the SearchUrl property just-in-time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it's your choice.

    So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I'll be updating the Trend API to use the same URL as all other APIs soon, so the only API left to use the SearchUrl will be Search, but for the short term, it's Trends and Search. Until I make this change, you'll have a viable work-around by setting the URL yourself, as explained above.

    These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that's the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext's BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress. This capability lets you use LINQ to Twitter with any of these services. This is a testament to the success of the Twitter API and its popularity.

    No doubt we'll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you.

    @JoeMayo

    Read the article

  • Issue Parsing File with YAML-CPP

    - by Andrew
    In the following code, I'm having some sort of issue getting my .yaml file parsed using parser.GetNextDocument(doc);. After much gross debugging, I've found that the (main) issue here is that my for loop is not running, due to doc.size() == 0; What am I doing wrong? void BookView::load() { aBook.clear(); QString fileName = QFileDialog::getOpenFileName(this, tr("Load Address Book"), "", tr("Address Book (*.yaml);;All Files (*)")); if(fileName.isEmpty()) { return; } else { try { std::ifstream fin(fileName.toStdString().c_str()); YAML::Parser parser(fin); YAML::Node doc; std::map< std::string, std::string > entry; parser.GetNextDocument(doc); std::cout << doc.size(); for( YAML::Iterator it = doc.begin(); it != doc.end(); it++ ) { *it >> entry; aBook.push_back(entry); } } catch(YAML::ParserException &e) { std::cout << "YAML Exception caught: " << e.what() << std::endl; } } updateLayout( Navigating ); } The .yaml file being read was generated using yaml-cpp, so I assume it is correctly formed YAML, but just in case, here's the file anyways. ^@^@^@\230--- - address: ****************** comment: None. email: andrew(dot)levenson(at)gmail(dot)com name: Andrew Levenson phone: **********^@ Edit: By request, the emitting code: void BookView::save() { QString fileName = QFileDialog::getSaveFileName(this, tr("Save Address Book"), "", tr("Address Book (*.yaml);;All Files (*)")); if (fileName.isEmpty()) { return; } else { QFile file(fileName); if(!file.open(QIODevice::WriteOnly)) { QMessageBox::information(this, tr("Unable to open file"), file.errorString()); return; } std::vector< std::map< std::string, std::string > >::iterator itr; std::map< std::string, std::string >::iterator mItr; YAML::Emitter yaml; yaml << YAML::BeginSeq; for( itr = aBook.begin(); itr < aBook.end(); itr++ ) { yaml << YAML::BeginMap; for( mItr = (*itr).begin(); mItr != (*itr).end(); mItr++ ) { yaml << YAML::Key << (*mItr).first << YAML::Value << (*mItr).second; } yaml << YAML::EndMap; } yaml << YAML::EndSeq; QDataStream out(&file); out.setVersion(QDataStream::Qt_4_5); out << yaml.c_str(); } }

    Read the article

  • C# SQLite file import prevent duplicates

    - by jakesankey
    Hi, I am attempting to get a directory (which is ever-growing) full of .txt comma delimited files to import into my SQLite db. I now have all of the files importing ok, however I need to have some way of excluding the files that have been previously added to db. I have a column in the db called FileName where the name and extension are stored next to each record from each file. Now I need to say 'If the code finds XXX.txt and XXX.txt is already in db, then skip this file'. Can I somehow add this logic to the getfiles command or is there another easy way? using (SQLiteCommand insertCommand = con.CreateCommand()) { SQLiteCommand cmdd = con.CreateCommand(); string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\", "R303717*.txt*", SearchOption.AllDirectories); foreach (string file in files) { string FileNameExt1 = Path.GetFileName(file); cmdd.CommandText = @" SELECT COUNT(*) FROM Import WHERE FileName = @FileExt;"; cmdd.Parameters.Add(new SQLiteParameter("@FileExt", FileNameExt1)); int count = Convert.ToInt32(cmdd.ExecuteScalar()); //int count = ((IConvertible)insertCommand.ExecuteScalar().ToInt32(null)); if (count == 0) { Console.WriteLine("Parsing CMM data for SQL database... Please wait."); insertCommand.CommandText = @" INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);"; insertCommand.Parameters.Add(new SQLiteParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Value", DbType.String)); insertCommand.Parameters.Add(new SQLiteParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@OutOfTol", DbType.Decimal)); insertCommand.Parameters.Add(new SQLiteParameter("@Comment", DbType.String)); string FileNameExt = Path.GetFileName(file); string RNumber = Path.GetFileNameWithoutExtension(file); string RNumberE = RNumber.Split('_')[0]; string RNumberD = RNumber.Split('_')[1]; string RNumberDate = RNumber.Split('_')[2]; DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } Console.WriteLine(tmpLine); foreach (SQLiteParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] { ',' }); for (int i = 0; i < values.Length - 1; i++) { SQLiteParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? 
value : 0; } else { param.Value = values[i]; } } insertCommand.Parameters.Add(new SQLiteParameter("@PartNumber", RNumberE)); insertCommand.Parameters.Add(new SQLiteParameter("@CMMNumber", RNumberD)); insertCommand.Parameters.Add(new SQLiteParameter("@Date", cmmDate)); insertCommand.Parameters.Add(new SQLiteParameter("@FileName", FileNameExt)); // insertCommand.ExecuteNonQuery(); } } } Console.WriteLine("CMM data successfully imported to SQL database..."); } con.Close(); }

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20ms), times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43AM, 20ms later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58!

    I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz [different from above] for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

        Request time   Complete time
        04:32:43       04:33:36
        04:32:50       04:33:36
        04:32:51       04:33:38
        04:33:05       04:33:38
        04:33:34       04:33:42
        04:33:05       04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or if we're just seeing the evidence of some server-level overload as it plays out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php . It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if this press+125.php is a rabbit hole, but it is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

        ///DOWNLOAD.php
        $file = new files();
        $file->comparison_filter("id", "=", $id); //sql to load
        if ($file->load()) {
            $file->serve();
        }

        //FILES
        function serve() {
            if ($this->is_loaded) {
                if (file_exists($this->get_value("filename"))) {
                    if ($this->get_value("content_type") != "") {
                        header("Content-Type: " . $this->get_value("content_type"));
                    }
                    header("Content-Length: " . filesize($this->get_value("filename")));
                    if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                        header("Cache-Control: private");
                        header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                    }
                    set_time_limit(0);
                    @readfile($this->get_value("filename"));
                    exit;
                }
            }
        }

    Read the article

  • Can I attach data gathered by a form to a file that is being uploaded?

    - by Jacob
    I need customers to upload files to my website, and I want to gather their name or company name and attach it to the file name, or create a folder on the server with that as the name, so we can keep the files organized. I am using PHP to upload the file.

    PHP:

        if(isset($_POST['submit'])){
            $target = "upload/";
            $file_name = $_FILES['file']['name'];
            $tmp_dir = $_FILES['file']['tmp_name'];
            try{
                if(!preg_match('/(jpe?g|psd|ai|eps|zip|rar|tif?f|pdf)$/i', $file_name)) {
                    throw new Exception("Wrong File Type");
                    exit;
                }
                move_uploaded_file($tmp_dir, $target . $file_name);
                $status = true;
            } catch (Exception $e) {
                $fail = true;
            }
        }

    Other PHP w/ form:

        <form enctype="multipart/form-data" action="" method="post">
        <input type="hidden" name="MAX_FILE_SIZE" value="1073741824" />
        <label for="file">Choose File to Upload </label><br />
        <input name="file" type="file" id="file" size="50" maxlength="50" /><br />
        <input type="submit" name="submit" value="Upload" />

    PHP:

        if(isset($status)) {
            $yay = "alert-success";
            echo "<div class=\"$yay\"> <br/> <h2>Thank You!</h2> <p>File Upload Successful!</p></div>";
        }
        if(isset($fail)) {
            $boo = "alert-error";
            echo "<div class=\"$boo\"> <br/> <h2>Sorry...</h2> <p>There was a problem uploading the file.</p><br/><p>Please make sure that you are trying to upload a file that is less than 50mb and an acceptable file type.</p></div>";
        }

    Read the article

  • Java File I/O problems

    - by dwwilson66
    This is my first time working with file i/o in java, and it's not working. The section of the program where I parse individual lines and output a semicolon delimited line works like a charm when I hardcode a file and display on screen. Whne I try to write to a file public static OutputStream... errors out as an illegal start to expression, and I've been unable to get the program to step through an entire directory of files instead of one at a time. Where I'm not clear: I'm note setting an output filename anywhere...whare am I supposed to do that? The path variable won't pass. What's the proper format for a path? Can anyone see what I need to debug here? import java.io.*; public class FileRead { public static void main(String args[]) { try { // Open the file(s) // single file works OK FileInputStream fstream = new FileInputStream("testfile.txt"); Path startingDir = R:\Data\cs\RoboHelp\CorrLib\Output\Production\WebHelp; PrintFiles pf = new PrintFiles(); Files.walkFileTree(startingDir, pf); // Get the object of DataInputStream DataInputStream in = new DataInputStream(fstream); BufferedReader br = new BufferedReader(new InputStreamReader(in)); String inputLine; String desc = ""; String docNo = ""; String replLtr = ""; String specCond = ""; String states = ""; String howGen = ""; String whenGen = ""; String owner = ""; String lastChange = ""; //Read File Line By Line while ((inputLine = br.readLine()) != null) { int testVal=0; int stringMax = inputLine.length(); // if(inputLine.startsWith("Description")) {desc = inputLine.substring(13,inputLine.length());} else if(inputLine.startsWith("Reference Number")) {docNo = inputLine.substring(20,inputLine.length());} else if(inputLine.startsWith("Replaces Letter")) {replLtr = inputLine.substring(17,inputLine.length());} else if(inputLine.startsWith("Special Conditions")) {specCond = inputLine.substring(21,inputLine.length());} else if(inputLine.startsWith("States Applicable")) {states = inputLine.substring(19,inputLine.length());} else if(inputLine.startsWith("How Generated")) {howGen = inputLine.substring(15,inputLine.length());} else if(inputLine.startsWith("When Generated")) {whenGen = inputLine.substring(16,inputLine.length());} else if(inputLine.startsWith("Owner")) {owner = inputLine.substring(7,inputLine.length());} else if(inputLine.startsWith("Last Change Date")) {lastChange = inputLine.substring(17,inputLine.length());} } //close while loop // Print the content on the console String outStr1 = (desc + ";" + docNo + ";" + replLtr + ";" + specCond + ";" + states); String outStr2 = (";" + howGen + ";" + whenGen + ";" + owner + ";" + lastChange); String outString = (outStr1 + outStr2); System.out.print(inputLine + "\n" + outString); String lineItem = (outStr1+outStr2); // try (OutputStream out = new BufferedOutputStream (logfile.newOutputStream(CREATE, APPEND))) { out.write(lineItem, 0, lineItem.length); } catch (IOException x) { System.err.println(x); } public static OutputStream newOutputStream() throws IOException { // append to an existing file, create file if it doesn't initially exist out = Files.newOutputStream(c:, CREATE, APPEND); } //Close the input stream in.close(); } catch (Exception e) { //Catch exception if any System.err.println("Error: " + e.getMessage()); } } }

    Read the article

  • Nginx - Redirect any Subdomain to File without Rewriting

    - by Waffle
    Recently I have switched from Apache to Nginx to increase performance on a web server running Ubuntu 11.10. I have been having issues trying to figure out how certain things work in Nginx compared to Apache, but one issue has been stumping me and I have not been able to find the answer online. My problem is that I need to be able to redirect (not rewrite) any sub-domain to a file, but that file needs to be able to get the sub-domain part of the URL in order to do a database look-up of that sub-domain. So far, I have been able to get any sub-domain to rewrite to that file, but then it loses the text of the sub-domain I need. So, for example, I would like test.server.com to redirect to server.com/resolve.php, but still remain as test.server.com. If this is not possible, the thing that I would need at the very least would be something such as going to test.server.com would go to server.com/resolve.php?=test . One of these options must be possible in Nginx. My config as it stands right now looks something like this: server { listen 80; ## listen for ipv4; this line is default and implied listen [::]:80 default ipv6only=on; ## listen for ipv6 root /usr/share/nginx/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name www.server.com server.com; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ /index.html; } location /doc { root /usr/share; autoindex on; allow 127.0.0.1; } location /images { root /usr/share; autoindex off; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/www; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } server { listen 80 default; server_name *.server.com; rewrite ^ http://www.server.com/resolve.php; } As I said before, I am very new to Nginx, so I have a feeling the answer is pretty simple, but no examples online seem to deal with just redirects without rewrites or rewriting with the sub-domain section included. Any help on what to do would be most appreciated and if any one has a better idea to accomplish what I need, I am also open to ideas. Thank you very much.

    Read the article

  • File server share access intermittent/slow/machine unstable: win2k8r2

    - by Jack B.
    I have a file server running Win2k8R2 on an older HP DL380G4. It has nothing set up on it other than file sharing, and all drivers/firmware/updates are installed. The file server is used as a dump for a bunch of test machines, so essentially a lot of small files are being written to it. It was working fine until it started showing the following symptoms: shares became either very slow/intermittent or could not be accessed at all. Logging in to the server, you could use it like normal, but Windows would start freezing and eventually you had to hard-reboot it because nothing was responsive. After rebooting, it would work fine for 20 minutes to 2 hours and then degrade into this broken state again. Some info after investigation: the HP RAID Config utility shows the RAID array as functioning properly (RAID5, by the way). The event log shows a bunch of DoS attacks from the test machines, saying it has disconnected the connection; a. AFAIK (not part of my job) the test machines haven't changed the way they log information to this server, and the number of them hasn't increased; b. nothing is infected, this server was scanned fully, and the test machines are re-imaged almost daily. Nothing in Performance Monitor shows anything being pegged at maximum (CPU/HD/network/RAM). I installed MS Network Monitor and it is showing a lot of traffic. The server was using one gigabit Ethernet connection; I connected the second one as well, with the same results. Forgot to add: one of the commonly written-to dirs on the share has over 16k subdirs in it, with a crapton of small files within those dirs. Some of the OS instability was slow access to the drive which has this directory, though perfmon doesn't show much activity on the HD, so I'm not sure if this crowded dir is the cause. Here is one important fact: I ran into this issue 2-3 months ago, couldn't figure it out, but I had a spare identical machine so I swapped them out (I thought it was related to the machine), and now I have the same issue. Also, the computer will be stable if I turn off file sharing. So is the server just getting DoS'd by the test machines? I've never dealt with such an issue. Is instability in the server's OS common when getting DoS'd? Is there anything I can do to confirm this before telling the owners of the test machines to optimize their traffic? (I'm not sure what they'll be able to do.) Is there something within Win2k8R2 that can balance the traffic across the two NICs? Any help would be appreciated. Update: another thought - the drive with the share is RAID5 across 6 SCSI320 300GB HDs. They are near full capacity, with about 100GB of 1TB left. Could the amount of tiny files be causing some weirdness with the parity in this array? I think I've read something about this in the past, but I'm no expert on RAID.

    Read the article

  • .NET file Decryption - Bad Data

    - by Jon
    I am in the process of rewriting an old application. The old app stored data in a scoreboard file that was encrypted with the following code: private const String SSecretKey = @"?B?n?Mj?"; public DataTable GetScoreboardFromFile() { FileInfo f = new FileInfo(scoreBoardLocation); if (!f.Exists) { return setupNewScoreBoard(); } DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); //A 64 bit key and IV is required for this provider. //Set secret key For DES algorithm. DES.Key = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Set initialization vector. DES.IV = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Create a file stream to read the encrypted file back. FileStream fsread = new FileStream(scoreBoardLocation, FileMode.Open, FileAccess.Read); //Create a DES decryptor from the DES instance. ICryptoTransform desdecrypt = DES.CreateDecryptor(); //Create crypto stream set to read and do a //DES decryption transform on incoming bytes. CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read); DataTable dTable = new DataTable("scoreboard"); dTable.ReadXml(new StreamReader(cryptostreamDecr)); cryptostreamDecr.Close(); fsread.Close(); return dTable; } This works fine. I have copied the code into my new app so that I can create a legacy loader and convert the data into the new format. The problem is I get a "Bad Data" error: System.Security.Cryptography.CryptographicException was unhandled Message="Bad Data.\r\n" Source="mscorlib" The error fires at this line: dTable.ReadXml(new StreamReader(cryptostreamDecr)); The encrypted file was created today on the same machine with the old code. I guess that maybe the encryption / decryption process uses the application name / file or something and therefore means I can not open it. Does anyone have an idea as to: A) Be able explain why this isn't working? B) Offer a solution that would allow me to be able to open files that were created with the legacy application and be able to convert them please? Here is the whole class that deals with loading and saving the scoreboard: using System; using System.Collections.Generic; using System.Text; using System.Security.Cryptography; using System.Runtime.InteropServices; using System.IO; using System.Data; using System.Xml; using System.Threading; namespace JawBreaker { [Serializable] class ScoreBoardLoader { private Jawbreaker jawbreaker; private String sSecretKey = @"?B?n?Mj?"; private String scoreBoardFileLocation = ""; private bool keepScoreBoardUpdated = true; private int intTimer = 180000; public ScoreBoardLoader(Jawbreaker jawbreaker, String scoreBoardFileLocation) { this.jawbreaker = jawbreaker; this.scoreBoardFileLocation = scoreBoardFileLocation; } // Call this function to remove the key from memory after use for security [System.Runtime.InteropServices.DllImport("KERNEL32.DLL", EntryPoint = "RtlZeroMemory")] public static extern bool ZeroMemory(IntPtr Destination, int Length); // Function to Generate a 64 bits Key. private string GenerateKey() { // Create an instance of Symetric Algorithm. Key and IV is generated automatically. DESCryptoServiceProvider desCrypto = (DESCryptoServiceProvider)DESCryptoServiceProvider.Create(); // Use the Automatically generated key for Encryption. return ASCIIEncoding.ASCII.GetString(desCrypto.Key); } public void writeScoreboardToFile() { DataTable tempScoreBoard = getScoreboardFromFile(); //add in the new scores to the end of the file. 
for (int i = 0; i < jawbreaker.Scoreboard.Rows.Count; i++) { DataRow row = tempScoreBoard.NewRow(); row.ItemArray = jawbreaker.Scoreboard.Rows[i].ItemArray; tempScoreBoard.Rows.Add(row); } //before it is written back to the file make sure we update the sync info if (jawbreaker.SyncScoreboard) { //connect to webservice, login and update all the scores that have not been synced. for (int i = 0; i < tempScoreBoard.Rows.Count; i++) { try { //check to see if that row has been synced to the server if (!Boolean.Parse(tempScoreBoard.Rows[i].ItemArray[7].ToString())) { //sync info to server //update the row to say that it has been updated object[] tempArray = tempScoreBoard.Rows[i].ItemArray; tempArray[7] = true; tempScoreBoard.Rows[i].ItemArray = tempArray; tempScoreBoard.AcceptChanges(); } } catch (Exception ex) { jawbreaker.writeErrorToLog("ERROR OCCURED DURING SYNC TO SERVER UPDATE: " + ex.Message); } } } FileStream fsEncrypted = new FileStream(scoreBoardFileLocation, FileMode.Create, FileAccess.Write); DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); DES.Key = ASCIIEncoding.ASCII.GetBytes(sSecretKey); DES.IV = ASCIIEncoding.ASCII.GetBytes(sSecretKey); ICryptoTransform desencrypt = DES.CreateEncryptor(); CryptoStream cryptostream = new CryptoStream(fsEncrypted, desencrypt, CryptoStreamMode.Write); MemoryStream ms = new MemoryStream(); tempScoreBoard.WriteXml(ms, XmlWriteMode.WriteSchema); ms.Position = 0; byte[] bitarray = new byte[ms.Length]; ms.Read(bitarray, 0, bitarray.Length); cryptostream.Write(bitarray, 0, bitarray.Length); cryptostream.Close(); ms.Close(); //now the scores have been added to the file remove them from the datatable jawbreaker.Scoreboard.Rows.Clear(); } public void startPeriodicScoreboardWriteToFile() { while (keepScoreBoardUpdated) { //three minute sleep. Thread.Sleep(intTimer); writeScoreboardToFile(); } } public void stopPeriodicScoreboardWriteToFile() { keepScoreBoardUpdated = false; } public int IntTimer { get { return intTimer; } set { intTimer = value; } } public DataTable getScoreboardFromFile() { FileInfo f = new FileInfo(scoreBoardFileLocation); if (!f.Exists) { jawbreaker.writeInfoToLog("Scoreboard not there so creating new one"); return setupNewScoreBoard(); } else { DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); //A 64 bit key and IV is required for this provider. //Set secret key For DES algorithm. DES.Key = ASCIIEncoding.ASCII.GetBytes(sSecretKey); //Set initialization vector. DES.IV = ASCIIEncoding.ASCII.GetBytes(sSecretKey); //Create a file stream to read the encrypted file back. FileStream fsread = new FileStream(scoreBoardFileLocation, FileMode.Open, FileAccess.Read); //Create a DES decryptor from the DES instance. ICryptoTransform desdecrypt = DES.CreateDecryptor(); //Create crypto stream set to read and do a //DES decryption transform on incoming bytes. 
CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read); DataTable dTable = new DataTable("scoreboard"); dTable.ReadXml(new StreamReader(cryptostreamDecr)); cryptostreamDecr.Close(); fsread.Close(); return dTable; } } public DataTable setupNewScoreBoard() { //scoreboard info into dataset DataTable scoreboard = new DataTable("scoreboard"); scoreboard.Columns.Add(new DataColumn("playername", System.Type.GetType("System.String"))); scoreboard.Columns.Add(new DataColumn("score", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("ballnumber", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("xsize", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("ysize", System.Type.GetType("System.Int32"))); scoreboard.Columns.Add(new DataColumn("gametype", System.Type.GetType("System.String"))); scoreboard.Columns.Add(new DataColumn("date", System.Type.GetType("System.DateTime"))); scoreboard.Columns.Add(new DataColumn("synced", System.Type.GetType("System.Boolean"))); scoreboard.AcceptChanges(); return scoreboard; } private void Run() { // For additional security Pin the key. GCHandle gch = GCHandle.Alloc(sSecretKey, GCHandleType.Pinned); // Remove the Key from memory. ZeroMemory(gch.AddrOfPinnedObject(), sSecretKey.Length * 2); gch.Free(); } } }

    Read the article

  • Limit WebClient DownloadFile maximum file size

    - by Jack Juiceson
    Hi everyone, in my ASP.NET project my main page receives a URL as a parameter, which I need to download internally and then process. I know that I can use WebClient's DownloadFile method; however, I want to prevent a malicious user from supplying a URL to a huge file, which would cause unnecessary traffic on my server. To avoid this, I'm looking for a way to set a maximum file size that DownloadFile will download. Thank you in advance, Jack
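
    As far as I know, WebClient.DownloadFile itself does not expose a size cap, so the usual approach is to check the Content-Length header and/or stream the response while counting bytes, aborting once a limit is exceeded. Here is a sketch of that idea (shown in Python for illustration; the same logic ports to .NET with HttpWebRequest and a response stream). The 5 MB cap is an arbitrary placeholder.

        import urllib.request

        MAX_BYTES = 5 * 1024 * 1024  # placeholder limit

        def download_capped(url, dest, max_bytes=MAX_BYTES):
            """Stream the response to dest, aborting once the cap is exceeded."""
            with urllib.request.urlopen(url) as resp:
                # Reject early if the server declares an oversized body.
                length = resp.headers.get("Content-Length")
                if length is not None and int(length) > max_bytes:
                    raise ValueError("remote file exceeds size limit")
                written = 0
                with open(dest, "wb") as out:
                    while True:
                        chunk = resp.read(64 * 1024)
                        if not chunk:
                            break
                        written += len(chunk)
                        if written > max_bytes:
                            raise ValueError("remote file exceeds size limit")
                        out.write(chunk)

    Note that Content-Length alone is not enough, since a server can lie about it or omit it; the byte count during streaming is the real guard.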

    Read the article

  • Indexing CSV file contents in Python

    - by Hossein
    Hi, I have a very large CSV file containing only two fields (id, url). I want to do some indexing on the url field with Python. I know that there are some tools like Whoosh or PyLucene, but I can't get the examples to work. Can someone help me with this?
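
    A minimal sketch of the Whoosh route, assuming a headerless two-column CSV named urls.csv and an index directory named indexdir (both names are placeholders); the id is stored as-is and the url field is tokenized so it can be searched.

        import csv
        import os

        from whoosh import index
        from whoosh.fields import ID, TEXT, Schema
        from whoosh.qparser import QueryParser

        # One stored id field plus an analyzed, searchable url field.
        schema = Schema(id=ID(stored=True, unique=True), url=TEXT(stored=True))

        os.makedirs("indexdir", exist_ok=True)
        ix = index.create_in("indexdir", schema)

        writer = ix.writer()
        with open("urls.csv", newline="") as f:
            for row_id, url in csv.reader(f):
                writer.add_document(id=row_id, url=url)
        writer.commit()

        # Query the url field.
        with ix.searcher() as searcher:
            query = QueryParser("url", ix.schema).parse("example.com")
            for hit in searcher.search(query, limit=10):
                print(hit["id"], hit["url"])

    For a very large CSV, the single commit at the end keeps the indexing pass fast; if memory becomes a problem, committing in batches is the usual workaround.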

    Read the article

  • No bean named 'springSecurityFilterChain' is defined

    - by michaeljackson4ever
    When configs are loaded, I get the error SEVERE: Exception starting filter springSecurityFilterChain org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'springSecurityFilterChain' is defined My sec-config: <http use-expressions="true" access-denied-page="/error/casfailed.html" entry-point-ref="headerAuthenticationEntryPoint"> <intercept-url pattern="/" access="permitAll"/> <!-- <intercept-url pattern="/index.html" access="permitAll"/> --> <intercept-url pattern="/index.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/history.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/absence.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/search.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/employees.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/employee.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/contract.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/myforms.html" access="hasAnyRole('HLO','OPISK')"/> <intercept-url pattern="/vacationmsg.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/redirect.jsp" filters="none" /> <intercept-url pattern="/error/**" filters="none" /> <intercept-url pattern="/layout/**" filters="none" /> <intercept-url pattern="/js/**" filters="none" /> <intercept-url pattern="/**" access="isAuthenticated()" /> <!-- session-management invalid-session-url="/absence.html"/ --> <!-- logout logout-success-url="/logout.html"/ --> <custom-filter ref="ssoHeaderAuthenticationFilter" before="CAS_FILTER"/> <!-- CAS_FILTER ??? --> </http> <authentication-manager alias="authenticationManager"> <authentication-provider ref="doNothingAuthenticationProvider"/> </authentication-manager> <beans:bean id="doNothingAuthenticationProvider" class="com.nixu.security.sso.web.DoNothingAuthenticationProvider"/> <beans:bean id="ssoHeaderAuthenticationFilter" class="com.nixu.security.sso.web.HeaderAuthenticationFilter"> <beans:property name="groups"> <beans:map> <beans:entry key="cn=lake,ou=confluence,dc=utu,dc=fi" value="ROLE_ADMIN"/> </beans:map> </beans:property> </beans:bean> <beans:bean id="headerAuthenticationEntryPoint" class="com.nixu.security.sso.web.HeaderAuthenticationEntryPoint"/> And web.xml <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/applicationContext.xml /WEB-INF/sec-config.xml /WEB-INF/idm-config.xml /WEB-INF/ldap-config.xml </param-value> </context-param> <display-name>KeyCard</display-name> <context-param> <param-name>webAppRootKey</param-name> <param-value>KeyCardAppRoot</param-value> </context-param> <context-param> <param-name>log4jConfigLocation</param-name> <param-value>/WEB-INF/log4j.properties</param-value> </context-param> <!-- Reads request input using UTF-8 encoding --> <filter> <filter-name>characterEncodingFilter</filter-name> <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> <init-param> <param-name>forceEncoding</param-name> <param-value>true</param-value> </init-param> </filter> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <filter-mapping> <filter-name>characterEncodingFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> 
<url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <listener> <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class> </listener> <listener> <!-- this is for session scoped objects --> <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class> </listener> <listener> <listener-class>org.springframework.security.web.session.HttpSessionEventPublisher</listener-class> </listener> <!-- Handles all requests into the application --> <servlet> <servlet-name>KeyCard</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>tiles</servlet-name> <servlet-class>org.apache.tiles.web.startup.TilesServlet</servlet-class> <init-param> <param-name> org.apache.tiles.impl.BasicTilesContainer.DEFINITIONS_CONFIG </param-name> <param-value> /WEB-INF/tilesViewContext.xml </param-value> </init-param> <load-on-startup>2</load-on-startup> </servlet> <servlet-mapping> <servlet-name>KeyCard</servlet-name> <url-pattern>*.html</url-pattern> </servlet-mapping> <session-config> <session-timeout> 120 </session-timeout> </session-config> <welcome-file-list> <welcome-file>index.jsp</welcome-file> </welcome-file-list> <!-- error-page> <exception-type>java.lang.Exception</exception-type> <location>/WEB-INF/error/error.jsp</location> </error-page --> </web-app> What's wrong?

    Read the article

  • Mac OS - Download and save an image file on HDD with Cocoa

    - by Julien
    I'm building a program, and I'm quite confident using Objective-C, but I don't know how to programmatically download a file from the web and copy it to the hard drive. I started with: NSString* url = @"http://spiritofpolo.com/images/logo.png"; NSData* data = [NSData dataWithContentsOfURL:[NSURL URLWithString:url]]; But then I don't know what to do with the data... that sucks, no? ;) Can somebody help?

    Read the article
