Search Results

Search found 18409 results on 737 pages for 'large projects'.

Page 30/737 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Controlling access to large files in Apache

    - by obeattie
    Hi there, I am looking to control access to some large files (we're talking many GB here) by the use of signed URLs. The files are currently restricted by LDAP basic authentication (mod_auth_ldap), but I need to change this to verify a signature passed as a query parameter in the URL. Basically, I just need to run a script that verifies the signature and then allows the request to proceed as if authentication had succeeded. My initial thought was to use a simple CGI script, but as the files are so large I'm concerned about performance. So, really, this question is (probably) more like "are there any performance implications of streaming large files from a CGI script via Apache?"… and if so, "is there a better way of doing this (short of writing a dedicated authentication module)?" If this makes any sense, help would be much appreciated :) P.S. I wasn't sure exactly what to search for here (10 minutes of Googling were fruitless), so I may very well be duplicating someone else's post.
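
    A rough sketch of one way this is often handled (assuming mod_xsendfile is available; the shared signing key, file root and query-parameter names below are hypothetical): the CGI script only checks an HMAC over the requested path, then hands the transfer back to Apache with an X-Sendfile header, so the multi-GB file is never streamed through the script itself.

        #!/usr/bin/env python3
        # Sketch only: verify a signed URL, then let mod_xsendfile serve the file.
        import hashlib
        import hmac
        import os
        import sys
        from urllib.parse import parse_qs

        SECRET = b"shared-signing-key"      # assumption: same key used by the URL signer
        FILE_ROOT = "/srv/large-files"      # assumption: directory holding the files

        params = parse_qs(os.environ.get("QUERY_STRING", ""))
        path = params.get("path", [""])[0]
        sig = params.get("sig", [""])[0]

        expected = hmac.new(SECRET, path.encode("utf-8"), hashlib.sha256).hexdigest()

        if path and hmac.compare_digest(sig, expected):
            # Apache (mod_xsendfile) streams the file; the CGI process exits at once.
            sys.stdout.write("X-Sendfile: " + os.path.join(FILE_ROOT, path.lstrip("/")) + "\r\n")
            sys.stdout.write("Content-Type: application/octet-stream\r\n\r\n")
        else:
            sys.stdout.write("Status: 403 Forbidden\r\n")
            sys.stdout.write("Content-Type: text/plain\r\n\r\nInvalid signature\n")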

    Read the article

  • Unable to import example projects from book (newbie question)

    - by StayWett
    I am trying to import projects from the book "Beginning Android 2", but when I choose the root directory the "Finish" button is still grayed out. I've tried importing the project from scratch with no success, and I've also tried creating a template project with the same name and then importing into that (Import existing projects into workspace), but it always says "No projects are found to import". The contents of the folder are: a res folder, an src folder with the .java file, the manifest, and the build and default.properties files. I am new to Android dev (obviously), so I appreciate the help.

    Read the article

  • JAR file folder for eclipse projects

    - by Daff
    I'm trying to create a centralized folder (in some kind of "meta project" in my Eclipse workspace) for JAR files commonly used by the referenced projects in this workspace. It should work like the WEB-INF/lib folder for web projects but also apply to non-web projects, and automatically scan and add all JAR files in this folder. I tried creating a user library with these JAR files and referencing it from the projects, but I still have to add every new JAR manually to the user library (and I don't know whether it is referenced relative or absolute), and Tomcat (WTP) doesn't seem to take these files into its classpath on Run As - Run on Server (and I don't want to duplicate the JARs and put them into WEB-INF/lib). Any ideas?

    Read the article

  • I need to create a very large array of bits/boolean values. How would I do this in C/C++?

    - by Eddy
    Is it even possible to create an array of bits with more than 100000000 elements? If it is, how would I go about doing this? I know that for a char array I can do this: char* array; array = (char*)malloc(100000000 * sizeof(char)); If I were to declare the array as char array[100000000] I would get a segmentation fault, since such a large array doesn't fit on the stack, which is why I use malloc. Is there something similar I can do for an array of bits?
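
    The usual trick is to pack the bits into a byte buffer, addressing bit i at byte i / 8 with mask 1 << (i % 8); the arithmetic is the same whether the buffer is a malloc'd unsigned char array in C or, as in this quick illustrative sketch (hypothetical helper names), a Python bytearray.

        NUM_BITS = 100000000
        # 8 bits per byte: ~12.5 MB instead of 100 MB of one-byte flags.
        bits = bytearray((NUM_BITS + 7) // 8)

        def set_bit(i):
            bits[i >> 3] |= 1 << (i & 7)

        def clear_bit(i):
            bits[i >> 3] &= ~(1 << (i & 7)) & 0xFF

        def get_bit(i):
            return (bits[i >> 3] >> (i & 7)) & 1

        set_bit(99999999)
        print(get_bit(99999999))   # prints 1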

    Read the article

  • No response in Eclipse: File -> Import -> Existing Projects into Workspace

    - by Hula
    I'm trying to import one of the GWT samples into Eclipse by following the instructions below. But when I browse to the directory containing the "Hello" sample and uncheck "Copy projects into workspace", the Finish button is grayed out, preventing me from completing the import. Any ideas why? -- Option A: Import your project into Eclipse (recommended) -- If you use Eclipse, you can simply import the generated project into Eclipse. We've tested against Eclipse 3.3 and 3.4; later versions will likely also work, earlier versions may not. In Eclipse, go to the File menu and choose File - Import... - Existing Projects into Workspace. Browse to the directory containing this file and select "Hello". Be sure to uncheck "Copy projects into workspace" if it is checked. Click Finish.

    Read the article

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to a standalone desktop application written in Java. The database has grown fairly large, with several tables holding hundreds of thousands of rows. I expect these could grow to over a million entries for certain tables. What embedded database would best handle this? HSQLDB and SQLite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm SQLite can handle this) and the ease with which I can import old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here), but since my priorities differ somewhat from most applications, given the large data migration, I thought it justified a new question.
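
    On the MySQL import side, one low-friction route (a sketch only, assuming the old data is first dumped to CSV, e.g. with SELECT ... INTO OUTFILE; the table and columns here are hypothetical) is to bulk-load the CSV into SQLite in a single transaction:

        import csv
        import sqlite3

        # Assumption: members.csv was exported from the old MySQL database.
        conn = sqlite3.connect("app.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS members (
                            memberid INTEGER PRIMARY KEY,
                            name     TEXT,
                            joined   TEXT)""")

        with open("members.csv", newline="") as f:
            # executemany inside one transaction keeps the bulk insert fast,
            # even with hundreds of thousands of rows.
            conn.executemany("INSERT INTO members VALUES (?, ?, ?)", csv.reader(f))

        conn.commit()
        conn.close()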

    Read the article

  • Project Server 2010 Beta - can't connect Project client to projects hosted on server

    - by Chris W
    We have Project Server 2010 Beta installed on SharePoint 2010 Beta as a test set-up whilst we wait for the release versions. Whilst the installation seemed to complete without any issues, we're unable to open any projects within the Project 2010 client app. Project pops up the error 'Could not retrieve server initialization data'. My local event logs list some errors from MSSOAP that simply state an unanticipated error occurred during the processing of the request. The server doesn't log any errors. The SharePoint set-up is a farm containing 3 SharePoint servers. I log in to server 'PORTAL', but the Project Server stuff is configured to run on one of the other SharePoint boxes. I presume others have managed to get this working - has anyone got any ideas as to what could be wrong? Everything is patched correctly as far as I can tell.

    Read the article

  • SQL Server 2000 tables

    - by user40766
    We currently have an SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we could end up with a very large number of tables, potentially up to 2,147,483,647 considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • Large svn external

    - by MPelletier
    I have a project which uses a large library residing in its own repository. I'm using TortoiseSVN, and the server is running the enterprise edition of VisualSVN Server. The project itself has the "standard" structure: trunk, tags, branches. In each branch and tag, and in trunk, the library is pulled in as an external (via the svn:externals property). If I check out the entire tree, I get the library several times, which is getting ridiculously repetitive. Is there a recommended structure for this? Or perhaps a way not to fetch all the externals (the other externals are much smaller and easier to manipulate)?

    Read the article

  • Will increasing RAM improve Lightroom 3 large tiff loading times

    - by andy
    Set-up: mid-2009 17" unibody MacBook Pro, 4 GB RAM, 2.66 GHz Core 2 Duo, Snow Leopard 10.6.6, Lightroom 3. When working with 12-megapixel RAW files from a Nikon D700 there's no problem; Lightroom is fine. Recently I've been scanning film, which results in large TIFF files, about 130 MB each. The TIFF files themselves are good, and I'm happy with my scanning workflow. Working with these files in Lightroom is perfectly fine, except for one step: when I choose one of these photos in the Develop module, Lightroom displays "Loading" on the image for about a minute or two, which is quite long. Once the image is loaded, everything is fine again, and applying effects is instant. So my only issue is reducing that "Loading" time in the Develop module (the Library module is fine too). Will increasing my RAM to 8 GB help? I'm worried about spending the money and it not making any difference. thanks andy

    Read the article

  • Booting large ISO through PXE

    - by Devator
    I currently have a FOG server (which works perfectly fine) and I'm trying to boot Windows 7 through it (with memdisk). But since the ISO is rather large (more than 6 GB), memdisk tries to load the whole ISO into memory before booting, and it crashes with the error message "not enough memory to load specified image". The systems here don't have 6 GB of RAM, so I need another way to boot it. I am aware of WDS and SCCM; however, I want to do this with FOG. Is there any way to boot the ISO and install Windows through FOG?

    Read the article

  • Good/Better config for MySQL on an EC2 Large Instance

    - by Tim Reynolds
    I have an EC2 Large instance dedicated to MySQL. It will be serving a Joomla/Magento combo, so it has a blend of InnoDB and MyISAM tables. I have only worked with MyISAM in the past and am therefore unfamiliar with the settings InnoDB uses. Experiments so far have been less than fruitful, as I keep causing the InnoDB engine to be disabled. My instance is running Ubuntu 10.04 64-bit server edition and has ~7.5 GB of RAM. MySQL is currently using ~0.6% of that, with somewhat poor performance. I would like to configure it to use as much of the system RAM as is reasonable. Testing some settings, I learned that the InnoDB logs can't collectively be larger than 4 GB. Would anyone be able to provide some base InnoDB and MyISAM settings to get me started? Thank you Tim
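
    Purely as a starting point (a sketch with assumed values, not tuned figures; adjust for your workload and MySQL version), a my.cnf for a mostly-InnoDB box with ~7.5 GB of RAM often looks something like the following. One common cause of the "InnoDB disabled" symptom is changing innodb_log_file_size without removing the old ib_logfile* files after a clean shutdown, so check the MySQL error log for that.

        [mysqld]
        # Give InnoDB the bulk of the RAM; leave headroom for MyISAM and the OS.
        innodb_buffer_pool_size   = 4G
        # Total log size (file size x number of files) must stay under 4G here.
        innodb_log_file_size      = 256M
        innodb_log_files_in_group = 2
        innodb_flush_method       = O_DIRECT
        innodb_file_per_table     = 1

        # MyISAM index cache for the Joomla/Magento MyISAM tables.
        key_buffer_size           = 512M

        # Per-connection buffers stay small; they are multiplied by max_connections.
        sort_buffer_size          = 2M
        read_buffer_size          = 1M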

    Read the article

  • git/gitolite: big git repo with several mini projects

    - by Jay
    I'm pretty new to the whole version control thing, and even more so with git. I recently installed git on my computer(s) and set it up on a NAS server. However, I have several client folders with several project folders per client folder. Each one of these client folders is a giant repo encompassing every project inside it. What I'm wondering is, is there a way to break this apart? So, for instance: the NAS is my 'origin' and has gitolite installed. On computer1 I have every project folder ever created in a client folder (clean branch). On computer2 I don't have a checkout of the client branch (because all the projects in that branch are completed and I don't need a working copy of it), but I do have a brand-new project folder for that client, "newproject". Is there a way to commit and push to the NAS repo from computer2? Or perhaps is there a better way of organizing all this?
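
    As a sketch of one common way to organize this (the repo and user names below are hypothetical): give each project its own repo, grouped by client in the gitolite configuration, so computer2 only ever clones and pushes the one project it is working on.

        # In the gitolite-admin repo (conf/gitolite.conf), declare one repo per project:
        #     repo clientA/newproject
        #         RW+ = jay
        # Then, on computer2:
        git clone git@nas:clientA/newproject
        cd newproject
        # ...add the new project's files...
        git add .
        git commit -m "Initial commit of newproject"
        git push origin master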

    Read the article

  • Classic ASP on large memory server

    - by Steve Evans
    I have a client with a large ASP app that apparently is fairly memory intensive. I'm helping them migrate to new hardware running Win2k8 R2. They have 4 physical servers with 32 GB of RAM each. I'm making the assumption that ASP apps run as a 32-bit process. So I see that we have two options: enable web gardens on the application pool, or use the physical servers as VM hosts and split each box into, say, 4 web servers. Any thoughts on which path will provide us better performance? I'm just not really sure how ASP will handle a machine with lots of memory, and I'm worried it won't really be able to address the memory well. (You can ignore all the obvious stuff like the increased maintenance of 16 web servers vs 4, or the flexibility virtualization gives us over physical servers, etc.)

    Read the article

  • Win 7 accessing large files uses 100% RAM

    - by user181276
    Running Win 7 64-bit SP1 with 8 GB RAM. I first noticed this problem when using the GUI to copy some large (5+ GB) files from one disk to another. What happens is that the physical memory in use rises quite quickly to 100% and the system slows to a crawl. If I just start to access the file in a media player (it is a movie), the memory usage climbs slowly but eventually reaches 100%. When copying the same files via XCOPY I do not have this problem. Using RAMMap I see most of the memory usage is under "Mapped File" and is allocated under the "Active" column. If I select "Empty System Working Set", the RAM usage drops back down but then starts to climb again. Any ideas on what I can check/test to eliminate this issue?

    Read the article

  • Apache, Django with mod_wsgi, and large request buffering

    - by Mukul
    In my setup of Apache 2.2 MPM worker and Django 1.3 with mod_wsgi 2.8, I need to support large POST request payloads. The problem is that when there are many such simultaneous requests, Apache uses up all the memory in the system and then crashes. It seems that Apache is buffering the requests completely in memory before executing the WSGI handler and passing it the request. Is there any way to control request buffering in Apache? The log shows the following error whenever the crash happens:

        [Wed Jun 29 18:35:27 2011] [error] cgid daemon process died, restarting

    Here's my virtual host's configuration:

        <VirtualHost *:8080>
            ServerName example.com
            ErrorLog /var/log/apache2/error.log
            WSGIScriptAlias / <path to django.wsgi>
            WSGIPassAuthorization on
            WSGIDaemonProcess example.com
            WSGIProcessGroup example.com
            XSendFileAllowAbove on
            XSendFile on
        </VirtualHost>

    Read the article

  • Listing side projects in a jr. sysadmin resume

    - by Beaming Mel-Bin
    I have many "side-projects" that were not part of my past jobs. Just for example: Configuring web site environment for professors and friends Configuring a Linux box that does the routing, firewall (iptables), backup and file sharing (samba) for my apartment Developing small websites for things as simple as party invites to polling friends. Running my own SMTP server with domain keys, SPF and DNSBL Etc., etc. What would be the appropriate section to mention this? Should I even mention it? Perhaps it's best to just bring it up during the interview. I would especially appreciate the opinion of hiring managers.

    Read the article

  • Large Users Profile - Windows 7 - Machine running slowly

    - by Richard
    I have the MD of a client of ours who has a Windows 7 profile that is currently 14 GB, thanks to videos, music and documents. The first thing we did was to switch from a roaming to a local profile. What I need to know is: now that the profile is local, am I wasting my time by reducing it any further? Does having a large local user profile really make a difference to performance? Only the 4 GB Outlook OST talks to the network frequently. Thanks in advance.... Richard

    Read the article

  • secure synchronization of large amount of data

    - by goncalopp
    I need to automatically mirror a large amount of files (terabytes) between two Unix machines over a slow link (1 Mbps). This needs to be done frequently, but the data doesn't change too much (delta transmission doesn't saturate the link). The usual solution would be rsync, but there's an additional requirement: it's undesirable, from a security standpoint, for either the source or destination machine to hold (passphrase-less) SSH keys for the other, or to have any kind of filesystem access to it. All communication between the two machines should thus be initiated (and mediated) by a third machine. I've asked a separate question about rsync in particular here. Are there other obvious solutions I'm missing?
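
    One arrangement that fits the "no mutual SSH keys" constraint (a sketch only; the host and module names are hypothetical) is to run an rsync daemon on each endpoint and let the third machine pull from the source and push to the destination over the rsync protocol, so neither endpoint ever authenticates to the other:

        # On the mediator: pull changes from the source's rsync daemon module...
        rsync -az --delete source-host::exported-data/ /staging/data/

        # ...then push the staged copy to the destination's rsync daemon module.
        rsync -az --delete /staging/data/ dest-host::mirror-data/

    The modules are defined in each endpoint's rsyncd.conf with their own credentials and read-only/write settings, so the only trust relationships are between the mediator and each endpoint.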

    Read the article

  • Open source command line tools for indexing a large number of text files

    - by ergosys
    I'm looking for an open source command line tool, or tools, that will allow me to index and search a large number of plain text files. Approximate search would be a plus. The tool only needs to print the files that match, although some match context would be useful. A GUI tool isn't useful for my application, nor is anything that searches files one by one (grep, for example). I'm basically targeting Unix platforms (OS X, Linux, BSD). EDIT: I'm not interested in any sort of tool that is system-wide or needs to run in the background. Basically, I want to build an index for a directory tree full of text files and then later be able to search against it. Preferably the index is one or a few files whose location I can specify. Any ideas?

    Read the article

  • How to Shrink large Hyper-V VM

    - by autrevo
    Using the Disk2VHD utility (http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx) I converted my bare-metal OS into a Hyper-V VHD, and ended up with a huge 190 GB VHD file. Apart from performance issues, this VHD worked fine as a guest when hosted on Windows Server 2008 R2 Hyper-V. Having realized I only need to keep system files and application installations on the VHD, I have deleted most of the junk data from it, and it now contains only 20-25 GB. But I am not able to shrink the VHD. Having done some research, I came to understand this is a limitation of .VHD files, so I followed these two steps using the Edit Virtual Hard Disk Wizard on a Windows 2012 box: convert from VHD to VHDX (took close to 3 hrs.), then compact (another 4 hrs.). This did not shrink the VHDX either. Does Hyper-V not provide proper support for handling large VHDs or VHDXs whose size is in the range of 200 GB?

    Read the article

  • How To Speed Up Adding Column To Large Table In SQL Server

    - by Chris
    I want to add a column to a SQL Server table with about 10M rows. I think this query would eventually finish adding the column I want (alter table T add mycol bit not null default 0), but it's been running for several hours already. Is there any shortcut to get a "not null default 0" column added to a large table? Or is this inherently really slow? This is SQL Server 2000. Later on I have to do something similar on SQL Server 2008.
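
    One shortcut that often helps at this size (a sketch; test on a copy first, since locking and logging behaviour differ between SQL Server 2000 and 2008): add the column as NULLable, which is a metadata-only change, backfill it in batches, and only then add the NOT NULL constraint and the default.

        -- 1. Metadata-only change: completes almost instantly.
        ALTER TABLE T ADD mycol bit NULL;

        -- 2. Backfill in batches to keep the transaction log and locks in check
        --    (SET ROWCOUNT limits UPDATE on SQL Server 2000).
        SET ROWCOUNT 10000;
        WHILE 1 = 1
        BEGIN
            UPDATE T SET mycol = 0 WHERE mycol IS NULL;
            IF @@ROWCOUNT = 0 BREAK;
        END
        SET ROWCOUNT 0;

        -- 3. Now enforce the constraint and the default for future inserts.
        ALTER TABLE T ALTER COLUMN mycol bit NOT NULL;
        ALTER TABLE T ADD CONSTRAINT DF_T_mycol DEFAULT 0 FOR mycol;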

    Read the article

  • django wsgi multiple projects different url same apache server

    - by Thomas Schultz
    Hello, I'm trying to get two separate Django projects running on the same Apache server with mod_wsgi, under the same domain but at different URLs, like www.example.com/site1/ and www.example.com/site2/. What I'm trying to do is something like:

        <VirtualHost *:80>
            ServerName www.example.com
            <location "/site1/">
                DocumentRoot "/var/www/html/site1"
                WSGIScriptAlias / /var/www/html/site1/django.wsgi
            </location>
            <location "/site2/">
                DocumentRoot "/var/www/html/site2"
                WSGIScriptAlias / /var/www/html/site2/django.wsgi
            </location>
        </VirtualHost>

    The closest thing I've seen is this: http://docs.djangoproject.com/en/dev/howto/deployment/modpython/ but "mysite" is different for both of these cases, and they're using mod_python instead of mod_wsgi. Any help with this would be great, thanks!
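
    For what it's worth, one common mod_wsgi arrangement for this (a sketch, with placeholder paths and process names) mounts each project at its own sub-URL with its own WSGIScriptAlias and daemon process group, rather than nesting WSGIScriptAlias / inside Location blocks:

        <VirtualHost *:80>
            ServerName www.example.com

            WSGIDaemonProcess site1 processes=2 threads=15
            WSGIDaemonProcess site2 processes=2 threads=15

            WSGIScriptAlias /site1 /var/www/html/site1/django.wsgi
            WSGIScriptAlias /site2 /var/www/html/site2/django.wsgi

            <Location /site1>
                WSGIProcessGroup site1
            </Location>
            <Location /site2>
                WSGIProcessGroup site2
            </Location>
        </VirtualHost>

    Each django.wsgi then points DJANGO_SETTINGS_MODULE at its own project's settings, and the script alias prefix becomes SCRIPT_NAME so each project's URLs resolve under its own sub-path.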

    Read the article

  • Integrate Python Projects Into Xcode

    - by Vynile
    Hi! I'm a Mac user, and one of my hobbies is programming. I use Xcode, the integrated IDE of Mac OS X. I've started to learn the Python programming language, and I want to use Xcode to develop my scripts. I've searched the internet for weeks, but I didn't find anything useful. Firstly, I want to update the integrated interpreter of Mac OS X, which is at version 2.6. Secondly, I want to create a Python project in Xcode easily, like I do with C & C++ projects. Can you help me? I really need help! Cordially.

    Read the article

  • Storing large amounts of small files into bigger files on Windows

    - by asmo
    Let's say I have 50 GiB of files that weigh around 500 KiB each. My guess is that having, for example, 5 large files of 10 GiB each with the same content archived in them would be better for hard drive performance. Am I correct? Will there be a noticeable gain on an NTFS filesystem? Finally, which tool could I use to group the files together while retaining the ability to modify the content of the archive with zero or minor performance loss? For example, I like TrueCrypt archiving because after mounting an archive file, it creates a drive which I can use seamlessly as if it were a normal drive. The only thing with TrueCrypt is that I don't need encryption/compression, only archiving.

    Read the article
