Search Results

Search found 38088 results on 1524 pages for 'large scale project'.

Page 70/1524 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • PHP Multiple Calls to Server Share Objects?

    - by user1513171
    I’m wondering this about PHP on Apache. Do multiple calls to the server from different users—could be sitting next to each other, in different states, different countries, etc…—share memory? For example, if I create a static variable in a PHP script and set it to 1 by default, then user1 comes in and it changes to 2, and then user2 comes in at almost exactly the same time, does he see that static variable with a value of 1 or 2? An even better example is this class I have in PHP:

        class ApplicationRegistry {
            private static $instance;
            private static $PDO;

            private function __construct() {
                self::$PDO = $db = new \PDO('mysql:unix_socket=/........');
                self::$PDO->setAttribute(\PDO::ATTR_ERRMODE, \PDO::ERRMODE_EXCEPTION);
            }

            static function instance() {
                if(!isset(self::$instance)) {
                    self::$instance = new self();
                }
                return self::$instance;
            }

            static function getDSN() {
                if(!isset(self::$PDO)) {
                    self::instance();
                    return self::$PDO;
                }
                return self::$PDO;
            }
        }

    So this is a Singleton that has a static PDO instance. If user1 and user2 are hitting the server at the exact same time, are they using different instances of PDO or are they using the same one? This is a confusing concept for me and I'm trying to think of how my application will scale.
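
    For what it's worth, in a standard PHP-on-Apache setup each request runs with a fresh copy of every userland static, so nothing set during one request is visible to another; user1 and user2 each get their own PDO instance. The snippet below is not PHP, just a small Python illustration of that per-process isolation (the worker names are made up):

        # Two "requests" handled by separate worker processes: each sees its own
        # copy of the module-level value, mirroring how PHP statics are per-request.
        import multiprocessing

        counter = 1  # analogous to a PHP static initialised to 1

        def handle_request(user):
            global counter
            print(f"{user} starts with counter = {counter}")  # always 1: no sharing
            counter += 1  # this change stays private to the worker

        if __name__ == "__main__":
            workers = [multiprocessing.Process(target=handle_request, args=(u,))
                       for u in ("user1", "user2")]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            # Both workers report counter = 1, because each one got its own copy
            # and never saw the other's increment.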

    Read the article

  • scaling svg paths in Raphael 2.1

    - by user1229001
    I'm using SVG paths from a wikimedia commons map of the US. I've singled out Pennsylvania with its counties. I'm feeding the paths out of a database and using Raphael 2.1 to put them on the page. Because in the original map, Pennsylvania was so small and set at an angle, I'd like to scale up the paths and rotate Pa. so that it isn't on an angle. When I try to use Raphael's transform method, all the counties look strange and overlapped. I gave up on setting the viewBox when I heard that it doesn't work in all browsers. Anyone have any ideas? Here is my code:

        $(document).ready(function() {
            var $paths = [];   //array of paths
            var $thisPath;     //variable to hold whichever path we're drawing

            $.post('getmapdata.php', function(data) {
                var objData = jQuery.parseJSON(data);
                for (var i = 0; i < objData.length; i++) {
                    $paths.push(objData[i].path);
                    //$counties.push(objData[i].name);
                } //end for
                drawMap($paths);
            })

            function drawMap(data) {
                // var map = new Raphael(document.getElementById('map_div_id'),0, 0);
                var map = new Raphael(0, 0, 520, 320);
                //map.setViewBox(0,0,500,309, true);
                for (var i = 0; i < data.length; i++) {
                    var thisPath = map.path(data[i]);
                    thisPath.transform(s2);
                    thisPath.attr({stroke:"#FFFFFF", fill:"#CBCBCB","stroke-width":"0.5"});
                } //end cycling through i
            } //end drawMap
        }); //end program

    Read the article

  • Recommend me an architecture for this Facebook application

    - by andybaird
    Firstly, this question is subjective. There is not a right answer for this question and it really depends on what works for you. I'm hoping to use this thread as a breeding ground for ideas; I hope this is acceptable in this medium.

    I'm working on building a Facebook app that will be replacing an already popular app that gets ~50k hits a day. The original app is using a very typical LAMP setup with help from some Zend libraries for database layer abstraction. For the most part the app worked well, except that to solve a lot of issues I ended up fragmenting tables to speed things up. As a result, I couldn't do a lot of things with the app that I wanted to (namely any processing using aggregate data that needed to be returned quickly).

    So I'm starting to design plans for the next version of this application, and I have a whole bunch of new and cool features that I know would choke my current setup. I'm looking for technological recommendations of data storage methods that scale well. The database does not necessarily need to be relational; simple key/value storage would suffice (although at the present time I know little to nothing about KV stores). What's your recommendation? How would you tackle this? I'd like to take a completely free approach to this -- although I am most familiar and comfortable using PHP, I want to leave all technical options open.
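
    Purely as an illustration of the key/value direction mentioned above (an assumption, not what the original app used): keeping the hot aggregates as counters in something like Redis sidesteps the fragmented-table problem, because each event is a cheap increment and the summary can be read back instantly. A Python sketch using the redis-py client, with invented key names:

        # Keep aggregate counters in a key/value store instead of fragmented SQL tables.
        # Assumes a local Redis server and the redis-py package (pip install redis).
        import redis

        r = redis.Redis(host="localhost", port=6379, db=0)

        def record_action(user_id, action):
            # One O(1) increment per event; no table scan needed later.
            r.hincrby(f"user:{user_id}:actions", action, 1)
            r.incr(f"global:actions:{action}")

        def action_summary(user_id):
            # Returns the per-user aggregate without touching relational tables.
            return {k.decode(): int(v)
                    for k, v in r.hgetall(f"user:{user_id}:actions").items()}

        if __name__ == "__main__":
            record_action(42, "invite_sent")
            record_action(42, "invite_sent")
            print(action_summary(42))  # {'invite_sent': 2}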

    Read the article

  • How to avoid having very large objects with Domain Driven Design

    - by Pablojim
    We are following Domain Driven Design for the implementation of a large website. However, by putting the behaviour on the domain objects we are ending up with some very large classes. For example, on our WebsiteUser object we have many, many methods - e.g. dealing with passwords, order history, refunds, customer segmentation. All of these methods are directly related to the user. Many of these methods delegate internally to other child objects, but this still results in some very large classes. I'm keen to avoid exposing lots of child objects, e.g. user.getOrderHistory().getLatestOrder(). What other strategies can be used to avoid this problem?
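
    One strategy, sketched below in Python purely for illustration (the class names are invented, not from the article), is to keep the user as a thin aggregate root that exposes intention-revealing methods and delegates to small, focused collaborators, so callers never need chains like user.getOrderHistory().getLatestOrder():

        # Split a "god" user object into small collaborators with narrow interfaces.
        class PasswordManager:
            def __init__(self, user):
                self._user = user

            def change_password(self, old, new):
                # hashing / validation rules live here, not on WebsiteUser
                ...

        class OrderHistory:
            def __init__(self, orders):
                self._orders = orders

            def latest_order(self):
                return max(self._orders, key=lambda o: o["placed_at"], default=None)

        class WebsiteUser:
            """Thin aggregate root: exposes intent, delegates to collaborators."""
            def __init__(self, user_id, orders):
                self.user_id = user_id
                self._passwords = PasswordManager(self)
                self._history = OrderHistory(orders)

            # Intention-revealing methods instead of exposing child objects
            def latest_order(self):
                return self._history.latest_order()

            def change_password(self, old, new):
                self._passwords.change_password(old, new)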

    Read the article

  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of Student-Project Allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for an evaluation portion of my project. Going with the idea of the Assignment Problem and Hungarian Algorithm, I would like to express my data in the form of a .csv file which would end up looking like this in spreadsheet form (this is based on the structure I saw here):

        |          | Project 1 | Project 2 | Project 3 |
        |----------|-----------|-----------|-----------|
        | Student1 |           |     2     |     1     |
        |----------|-----------|-----------|-----------|
        | Student2 |     1     |     2     |     3     |
        |----------|-----------|-----------|-----------|
        | Student3 |     1     |     3     |     2     |
        |----------|-----------|-----------|-----------|

    To make it less cryptic: the rows are the Students/Agents and the columns represent Projects/Tasks. Obviously ONE project can be assigned to ONE student. That, in short, is what my project is about. The fields represent the preference weights the students have placed upon the projects (ranging from 1 to 10). If blank, that student does not want that project and there's no chance of him/her being assigned such.

    Anyway, my data is stored within dictionaries, specifically the students and projects dictionaries, such that:

        students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences)

    where preferences is in the form of a dictionary such that preferences[rank] = {project_id}, and:

        projects[project_id] = Project(project_id, project_name)

    I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs, which will populate the row labels, and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus for each student, I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create a .csv file. Any help, pointers or good tutorials will be highly appreciated.
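
    For the writing step itself, the standard library's csv module is enough. Below is a rough sketch; it assumes preferences maps a rank to a single project_id and that the Student/Project objects expose student_name, preferences and project_name attributes (adjust to the real classes):

        import csv

        def write_preference_matrix(students, projects, path="preferences.csv"):
            """One row per student, one column per project; cells hold the rank (blank = not wanted)."""
            project_ids = sorted(projects.keys())
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                # header row: empty corner cell, then the project names
                writer.writerow([""] + [projects[pid].project_name for pid in project_ids])
                for sid in sorted(students.keys()):
                    student = students[sid]
                    # invert preferences[rank] = project_id into project_id -> rank
                    rank_of = {pid: rank for rank, pid in student.preferences.items()}
                    writer.writerow([student.student_name] +
                                    [rank_of.get(pid, "") for pid in project_ids])

    Opening the resulting file in a spreadsheet gives exactly the grid shown above.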

    Read the article

  • UITableView scrolling for very large cells

    - by Mike
    I have a UITableView with very large cells with lots of content (more than one screen height). I need to scroll the UITableView to a certain position within those cells. I've found the method scrollToRowAtIndexPath:atScrollPosition:animated:, which works just fine if you've got small cells (less than one screen): you can just tell the UITableView to scroll so the needed cell appears at the top of the screen (for example). But this doesn't help at all when you've got very large cells. I need the UITableView to scroll to a certain position within my large cell, something like scrollToRowAtIndexPath but which accepts a pixel offset in addition to the cell number. Does anyone have any ideas? Or maybe a ready solution... I would appreciate any help.

    Read the article

  • MPI Large Data all to all transfer

    - by csslayer
    My MPI application has some processes that generate large data. Say we have N+1 processes (one for master control, the others are workers); each worker process generates large data, which is currently just written to normal files, named file1, file2, ..., fileN. The size of each file may be quite different. Now I need to send all of fileM to the rank M process to do the next job, so it's just like an all-to-all data transfer. My problem is how I should use the MPI API to send these files efficiently. I used to use a Windows shared folder to transfer these before, but I think it's not a good idea. I have thought about MPI_File and MPI_Alltoall, but these functions don't seem to be very suitable for my case. Simple MPI_Send and MPI_Recv seem hard to use because every process needs to transfer large data, and I don't want to use a distributed file system for now.
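
    Not the poster's code, just a sketch of the usual chunked point-to-point pattern, shown with mpi4py for brevity (the same shape maps onto MPI_Send/MPI_Recv in C); the file names, ranks and chunk size are placeholders:

        # Stream a large file from the rank that produced it to the rank that needs it,
        # in fixed-size chunks, using plain point-to-point messages.
        from mpi4py import MPI

        CHUNK = 4 * 1024 * 1024  # 4 MB per message (tuning assumption)
        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        def send_file(path, dest):
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(CHUNK)
                    comm.send(chunk, dest=dest, tag=0)
                    if not chunk:          # empty chunk signals end-of-file
                        break

        def recv_file(path, source):
            with open(path, "wb") as f:
                while True:
                    chunk = comm.recv(source=source, tag=0)
                    if not chunk:
                        break
                    f.write(chunk)

        # Example wiring: rank 0 produced "file3" and ships it to rank 3.
        if rank == 0:
            send_file("file3", dest=3)
        elif rank == 3:
            recv_file("file3_local", source=0)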

    Read the article

  • Controlling access to large files in Apache

    - by obeattie
    Hi there, I am looking to control access to some large files (we're talking many GB here) by the use of signed URLs. The files are currently restricted by LDAP Basic authentication (mod_auth_ldap), but I need to change this to verify the signature (passed as a query parameter in the URL). Basically, I just need to run a script to verify the signature, and allow the request to proceed as if authentication had succeeded. My initial thought to this was just to use a simple CGI script, but as the files are so large I'm concerned about performance. So, really, this question is (probably) more like "are there any performance implications of streaming large files from a CGI script via Apache?"… and if so, "is there a better way of doing this (short of writing a dedicated authentication module)?" If this makes any sense, help would be much appreciated :) P.S. I wasn't sure exactly what to search for for this (10 minutes of Googling were fruitless), so I may very well be duplicating someone else's post.
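
    For what it's worth, the signature check itself is only a few lines; the performance question is really about how the bytes are served afterwards (many setups hand the transfer back to Apache with mod_xsendfile instead of streaming through the script). A sketch of the verification step; the parameter names, secret and expiry scheme are assumptions, not from the post:

        # Verify an HMAC-signed URL of the form /path/to/file?expires=...&sig=...
        import hashlib
        import hmac
        import time

        SECRET = b"shared-secret-configured-elsewhere"  # assumption: distributed out of band

        def signature_for(path, expires):
            msg = f"{path}:{expires}".encode()
            return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

        def is_valid(path, expires, sig):
            if int(expires) < time.time():
                return False  # link has expired
            expected = signature_for(path, expires)
            # constant-time comparison avoids leaking the signature byte by byte
            return hmac.compare_digest(expected, sig)

        if __name__ == "__main__":
            expires = str(int(time.time()) + 3600)
            sig = signature_for("/files/big.iso", expires)
            print(is_valid("/files/big.iso", expires, sig))  # True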

    Read the article

  • Is it a good idea to apply some basic macros to simplify code in a large project?

    - by DoctorT
    I've been working on a foundational c++ library for some time now, and there are a variety of ideas I've had that could really simplify the code writing and managing process. One of these is the concept of introducing some macros to help simplify statements that appear very often, but are a bit more complicated than should be necessary. For example, I've come up with this basic macro to simplify the most common type of for loop:

        #define loop(v,n) for(unsigned long v=0; v<n; ++v)

    This would enable you to replace those clunky for loops you see so much of:

        for (int i = 0; i < max_things; i++)

    With something much easier to write, and even slightly more efficient:

        loop (i, max_things)

    Is it a good idea to use conventions like this? Are there any problems you might run into with different types of compilers? Would it just be too confusing for someone unfamiliar with the macro(s)?

    Read the article

  • I need to create a very large array of bits/boolean values. How would I do this in C/C++?

    - by Eddy
    Is it even possible to create an array of bits with more than 100000000 elements? If it is, how would I go about doing this? I know that for a char array I can do this:

        char* array;
        array = (char*)malloc(100000000 * sizeof(char));

    If I were to declare the array as char array[100000000] then I would get a segmentation fault, since the maximum number of elements has been exceeded, which is why I use malloc. Is there something similar I can do for an array of bits?
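
    Whatever the language, the underlying trick is the same: allocate n/8 bytes and address single bits with shifts and masks, so 100,000,000 flags fit in roughly 12.5 MB. A small Python illustration of that bookkeeping (a C version would wrap the same arithmetic around the malloc'd char buffer):

        # Pack one bit per flag into a byte buffer: 100,000,000 bits ~ 12.5 MB.
        class BitArray:
            def __init__(self, nbits):
                self.nbits = nbits
                self.buf = bytearray((nbits + 7) // 8)  # 8 flags per byte

            def set(self, i, value=True):
                byte, bit = divmod(i, 8)
                if value:
                    self.buf[byte] |= 1 << bit
                else:
                    self.buf[byte] &= ~(1 << bit) & 0xFF

            def get(self, i):
                byte, bit = divmod(i, 8)
                return (self.buf[byte] >> bit) & 1

        bits = BitArray(100_000_000)
        bits.set(99_999_999)
        print(bits.get(99_999_999), bits.get(0))  # 1 0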

    Read the article

  • Basic Application Organization + Publishing (.NET 4.0)

    - by keynesiancross
    Hi all, I'm trying to figure out the best way to keep my program organized. Currently I have many class files in one project file, but some of these classes do things that are very different, and some I would like to expose to other applications in the future. One thought I had for organizing my application was to create multiple project files, with one "Main" project which would interact with all the other projects and their relevant classes as needed. Does this make sense? I was wondering if anyone had any suggestions with regard to using multiple project files in one solution (and how do you create something like this?), and whether it makes sense to have multiple namespaces in one solution... Cheers

    ----Edit Below----

    Sorry, my fault. Currently my program is all in one console project. Within this project I have several classes, some of which basically launch a BackgroundWorker and run an endless loop pulling data. The BackgroundWorker then passes this data back to the main business logic as needed. I'm hoping to separate this data-pull material (including the BackgroundWorker material) into one project file, and the rest of the business logic into another project file. The projects will have to pass objects between each other though (the data to the main business logic, and the business logic will pass startup parameters to the dataPull project)... Hopefully this adds a bit more detail.

    Read the article

  • Embedded Java Databases for Large Data Sets

    - by ExAmerican
    I would like to port a PHP/MySQL-based client/server application to be a standalone desktop application written in Java. The database has grown to be fairly large, with several tables with hundreds of thousands of rows. I expect these could grow to over a million entries for certain tables. What embedded database would best handle this? HSQLDB and Sqlite seem to be the obvious choices, though I'm guessing there are others out there as well. My main priorities are the ability to perform queries on large amounts of data efficiently (this thread seems to confirm Sqlite can handle this) and the ease with which I can import old data from MySQL (I remember HSQLDB being kind of a pain for that). Note: I am aware that similar questions comparing embedded databases have been posted before (for example here and here) but as my priorities differ somewhat from most applications considering the large data migration I thought it justified a new question.
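
    As a rough feasibility check on the scale question (not Java, just the quickest way to poke at an SQLite file): a few million rows with an index on the lookup column stay responsive, and the resulting .db file can then be opened from Java through a JDBC driver. The table and column names below are invented for the example:

        import sqlite3

        conn = sqlite3.connect("feasibility.db")
        conn.execute("CREATE TABLE IF NOT EXISTS readings "
                     "(id INTEGER PRIMARY KEY, member_id INTEGER, value REAL)")
        conn.execute("CREATE INDEX IF NOT EXISTS idx_readings_member ON readings(member_id)")

        # Bulk-load a million synthetic rows in one transaction; executemany keeps it fast.
        rows = ((i % 5000, i * 0.5) for i in range(1_000_000))
        with conn:
            conn.executemany("INSERT INTO readings (member_id, value) VALUES (?, ?)", rows)

        # Indexed lookups stay quick even at this size.
        count = conn.execute("SELECT COUNT(*) FROM readings WHERE member_id = ?", (42,)).fetchone()[0]
        print(count)
        conn.close()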

    Read the article

  • apache+mod_wsgi configuration for django project(s) on a quad core

    - by Stefano
    I've been experimenting for quite some time with a "typical" django setting upon nginx+apache2+mod_wsgi+memcached(+postgresql) (reading the doc and some questions on SO and SF, see comments). Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part), I would like to know what a good configuration would look like with these hypotheses:

        Quad-Core Xeon 2.8GHz
        8 gigs memory
        several django projects (anything special related to this?)

    These are excerpts from my current confs:

    apache2

        SetEnv VHOST null
        #WSGIPythonOptimize 2
        <VirtualHost *:8082>
            ServerName subdomain.domain.com
            ServerAlias www.domain.com
            SetEnv VHOST subdomain.domain
            AddDefaultCharset UTF-8
            ServerSignature Off
            LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom
            ErrorLog /home/project1/var/logs/apache_error.log
            CustomLog /home/project1/var/logs/apache_access.log custom
            AllowEncodedSlashes On
            WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25
            WSGIScriptAlias / /home/project1/project/wsgi.py
            WSGIProcessGroup %{ENV:VHOST}
        </VirtualHost>

    wsgi.py

        import os
        import sys

        # setting all the right paths....
        _realpath = os.path.realpath(os.path.dirname(__file__))
        _public_html = os.path.normpath(os.path.join(_realpath, '../'))
        sys.path.append(_realpath)
        sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps')))
        sys.path.append(os.path.normpath(_public_html))
        sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs')))
        sys.path.append(os.path.normpath(os.path.join(_public_html, 'django')))

        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

        import django.core.handlers.wsgi
        _application = django.core.handlers.wsgi.WSGIHandler()

        def application(environ, start_response):
            """
            Launches django passing over some environment (domain name) settings
            """
            application_group = environ['mod_wsgi.application_group']
            """
            wsgi application group is required. It's also used to generate the
            HOST.DOMAIN.TLD:PORT parameters to pass over
            """
            assert application_group
            fields = application_group.replace('|', '').split(':')
            server_name = fields[0]
            os.environ['WSGI_APPLICATION_GROUP'] = application_group
            os.environ['WSGI_SERVER_NAME'] = server_name
            if len(fields) > 1:
                os.environ['WSGI_PORT'] = fields[1]
            splitted = server_name.rsplit('.', 2)
            assert splitted >= 2
            splited.reverse()
            if len(splitted) > 0:
                os.environ['WSGI_TLD'] = splitted[0]
            if len(splitted) > 1:
                os.environ['WSGI_DOMAIN'] = splitted[1]
            if len(splitted) > 2:
                os.environ['WSGI_HOST'] = splitted[2]
            return _application(environ, start_response)

    folder structure, in case it matters (slightly shortened actually):

        /home/www-data/projectN/var/logs
        /project (contains manage.py, wsgi.py, settings.py)
        /project/apps (all the project apps are here)
        /django
        /libs

    Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings. Are those fine? Is 25 threads an /ok/ number with a quad core for only one django project? Is it still ok with several django projects on different virtual hosts? Should I specify 'process'? Any other directive which I should add? Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file; should I switch to that? Or... should this conf just be running fine, and I should look for issues somewhere else?

    So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT, but what is worse, relatively often apache2 gets stuck. It just does not answer anymore and has to be restarted. I have set up monit to take care of that, but it ain't a real solution. I have been wondering if it's an issue with the database access (postgresql) under heavy load, but even if it was, why would the apache2 processes get stuck? Besides these two issues, performance is overall great. I even tried New Relic and got very good average results.

    Read the article

  • SQL Server 2000 tables

    - by user40766
    We currently have an SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into a one-table-per-user model. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • Will increasing RAM improve Lightroom 3 large tiff loading times

    - by andy
    Set up: mid-2009 17" unibody MacBook Pro, 4GB RAM, 2.66GHz Core 2 Duo, Snow Leopard 10.6.6, Lightroom 3. When working with 12 megapixel RAW files from a Nikon D700, no problem: Lightroom is fine. Recently I've been scanning film, and the scans result in large TIFF files, about 130mb each. The TIFF files themselves are good, and I'm happy with my scanning workflow. Working with these files in Lightroom is perfectly fine, except for one step. When I choose one of these photos in the Develop module, Lightroom displays "Loading" on the image for about a minute or two, which is quite long. Once the image is loaded, everything is fine again, and applying effects is instant. So my only issue is reducing that "loading" time in the Develop module (the Library module is fine too). Will increasing my RAM to 8GB help? I'm worried about spending the money and it not making any difference. thanks andy

    Read the article

  • Good/Better config for MySQL on an EC2 Large Instance

    - by Tim Reynolds
    I have an EC2 Large instance dedicated to MySQL. It will be serving a Joomla/Magento combo, so it has a blend of InnoDB and MyISAM tables. I have only worked with MyISAM in the past and am therefore unfamiliar with the settings InnoDB uses. Experiments so far have been less than fruitful, as I keep causing the InnoDB engine to be disabled. My instance is running Ubuntu 10.04 64-bit server edition and has ~7.5G of RAM. MySQL is currently using ~0.6% of that, with somewhat poor performance. I would like to configure it to use as much of the system RAM as is reasonable. Testing some settings, I learned that the InnoDB logs can't collectively be larger than 4G. Would anyone be able to provide some base InnoDB and MyISAM settings to get me started. Thank you Tim
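
    Not a tuned answer, but a conservative starting point for a dedicated ~7.5 GB box of that era might look like the snippet below; the values are guesses to grow from, and the comment notes the combined-log ceiling mentioned above. One common cause of InnoDB ending up disabled is changing innodb_log_file_size without first shutting MySQL down cleanly and removing the old ib_logfile* files before restarting:

        [mysqld]
        # InnoDB gets most of the memory, since Magento is InnoDB-heavy
        innodb_buffer_pool_size = 4G
        innodb_log_file_size    = 256M   # two files by default = 512M total, well under the 4G cap
        innodb_flush_method     = O_DIRECT
        innodb_file_per_table   = 1

        # MyISAM key cache for the Joomla tables
        key_buffer_size         = 512M

        max_connections         = 200
        query_cache_size        = 64M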

    Read the article

  • Booting large ISO through PXE

    - by Devator
    I currently have a FOG server (which works perfectly fine) and I'm trying to boot Windows 7 through it (with memdisk). But since the ISO is rather large (more than 6 GB), it will try to load the whole ISO into memory and then boot; however, it crashes with the error message not enough memory to load specified image. The systems here don't have 6 GB of RAM, so I need another way to boot it. I am aware of WDS and SCCM; however, I want to do this with FOG. Is there any way to boot the ISO and install Windows through FOG?

    Read the article

  • Win 7 accessing large files uses 100% RAM

    - by user181276
    Running Win 7 64-bit SP1 with 8 GB RAM. I first noticed this problem when using the GUI to copy some large (5+ GB) files from one disk to another. What happens is the physical memory in use rises quite quickly to 100% and the system comes to a crawl. If I just start to access the file in a media player (it is a movie) the memory usage climbs up slowly but eventually reaches 100%. When copying the same files via XCOPY I do not have this problem. Using RAMMAP I see most of the memory usage is under "Mapped File" and is allocated under the "Active" column. If I select "Empty System Working Set" the RAM usage drops back down but then starts to climb back up. Any ideas on what I can check/test to eliminate this issue?

    Read the article

  • Classic ASP on large memory server

    - by Steve Evans
    I have a client with a large ASP app that apparently is fairly memory intensive. I'm helping them migrate to new hardware they have running Win2k8 R2. They have 4 physical servers with 32gb of RAM each. I'm making the assumption that ASP apps run as a 32-bit process. So I see that we have two options: enable web gardens on the application pool, or use the physical servers as VM hosts and split each box into, say, 4 web servers. Any thoughts on which path will provide us better performance? I'm just not really sure how ASP will handle a machine with lots of memory, and I'm worried it won't really be able to address the memory well. (You can ignore all the obvious stuff like increased maintenance of 16 web servers vs 4, or the flexibility virtualization gets us over physical servers, etc.)

    Read the article

  • Apache, Django with mod_wsgi, and large request buffering

    - by Mukul
    In my setup of Apache 2.2 MPM worker and Django 1.3 with mod_wsgi 2.8, I need to support large POST request payloads. The problem is that when there are many such simultaneous requests, Apache uses up all the memory in the system and then crashes. It seems that Apache is buffering the requests completely in memory before executing the WSGI handler and passing it the request. Is there any way to control request buffering in Apache? The log shows the following error whenever the crash happens:

        [Wed Jun 29 18:35:27 2011] [error] cgid daemon process died, restarting

    Here's my virtual host's configuration:

        <VirtualHost *:8080>
            ServerName example.com
            ErrorLog /var/log/apache2/error.log
            WSGIScriptAlias / <path to django.wsgi>
            WSGIPassAuthorization on
            WSGIDaemonProcess example.com
            WSGIProcessGroup example.com
            XSendFileAllowAbove on
            XSendFile on
        </VirtualHost>

    Read the article

  • Large Users Profile - Windows 7 - Machine running slowly

    - by Richard
    I have the MD of a client of ours who has a Windows 7 profile that is currently 14GB, thanks to videos, music and documents. The first thing we did was to switch from a roaming profile to a local one. What I need to know is: now that the profile is local, am I wasting my time by reducing it any further? Does it really make a difference to performance having a large local user profile? Only the 4GB Outlook OST talks to the network frequently. Thanks in advance.... Richard

    Read the article

  • secure synchronization of large amount of data

    - by goncalopp
    I need to automatically mirror a large amount (terabytes) of files in two unix machines over a slow link (1 Mbps). This needs to be done frequently, but the data doesn't change too much (delta transmission doesn't saturate the link). The usual solution would be rsync, but there's an additional requirement: it's undesirable, from a security standpoint, that either the source or destination machines have (keyless) ssh keys to each other, or any kind of filesystem access. All communication between the two machines should thus be initialized (and mediated) through a third machine. I've asked a separate question about rsync in particular here. Are there other obvious solutions I'm missing?
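
    One way to meet the "no keys between source and destination" rule is to let the third machine do two hops (pull from the source into a staging area, then push to the destination), so only the mediator holds any credentials, at the cost of a staged copy on its disk. A sketch of that wrapper; the hostnames, paths and options are placeholders:

        # Mirror source -> destination via an intermediary that holds the only SSH keys.
        # Run on the third (mediating) machine.
        import subprocess

        SOURCE = "syncuser@source.example.com:/data/"
        DEST = "syncuser@dest.example.com:/data/"
        STAGING = "/srv/staging/data/"

        def rsync(src, dst):
            # -a preserve attributes, -z compress (helps on the 1 Mbps link),
            # --delete mirror removals, -e ssh for transport
            subprocess.run(["rsync", "-az", "--delete", "-e", "ssh", src, dst], check=True)

        if __name__ == "__main__":
            rsync(SOURCE, STAGING)   # pull: only the intermediary authenticates to the source
            rsync(STAGING, DEST)     # push: only the intermediary authenticates to the destination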

    Read the article

  • Open source command line tools for indexing a large number of text files

    - by ergosys
    I'm looking for any open source command line tool or tools which will allow me to index and search a large number of plain text files. Approximate search would be a plus. The tool only needs to print the files that match, although some match context would be useful. A GUI tool isn't useful for my application, nor is anything that searches files one by one (grep for example). I'm basically targeting unix platforms (osx, linux, bsd). EDIT: I'm not interested in any sort of tool that is system-wide, or needs to run in the background. Basically, I want to build an index for a directory tree full of text files and then later be able to search against it. Preferably the index is one or a few files that I can specify the location of. Any ideas?
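
    If rolling your own is acceptable, even a stdlib-only indexer covers the "index once, search later" workflow for exact word lookups (approximate matching would need more work; that's where purpose-built engines earn their keep). A rough sketch; the index location and tokenisation are arbitrary choices:

        # Build a word -> files inverted index for a directory tree, save it, query it later.
        import os
        import pickle
        import re
        import sys

        def build_index(root, index_path="text.idx"):
            index = {}
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "r", errors="ignore") as f:
                            words = set(re.findall(r"\w+", f.read().lower()))
                    except OSError:
                        continue
                    for word in words:
                        index.setdefault(word, set()).add(path)
            with open(index_path, "wb") as f:
                pickle.dump(index, f)

        def search(terms, index_path="text.idx"):
            with open(index_path, "rb") as f:
                index = pickle.load(f)
            hits = None
            for term in terms:
                matches = index.get(term.lower(), set())
                hits = matches if hits is None else hits & matches
            return sorted(hits or [])

        if __name__ == "__main__":
            # usage: indexer.py build DIR   |   indexer.py search WORD [WORD...]
            if sys.argv[1] == "build":
                build_index(sys.argv[2])
            else:
                print("\n".join(search(sys.argv[2:])))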

    Read the article

  • How to Shrink large Hyper-V VM

    - by autrevo
    Using the Disk2VHD utility I converted my bare-metal OS into a Hyper-V VHD - http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx - and ended up with a huge 190GB VHD file. Apart from performance issues, this VHD worked fine as a guest when hosted on Windows Server 2008 R2 Hyper-V. Having realized the need to keep only system files and application installations on the VHD, I have deleted most of the junk data from it and now it contains only 20-25 GB. But I am not able to shrink the VHD VM. Having done some research, I came to understand this is a limitation of .VHD files. Subsequently I followed these two steps using the Edit Virtual Hard Disk Wizard on a Windows 2012 box: Convert from VHD to VHDX (took close to 3 hrs.) Compact (Another 4 hrs.) This did not shrink the VHDX either. Does Hyper-V not provide proper support for handling large VHDs or VHDXs whose size is in the range of 200GB?

    Read the article

  • How To Speed Up Adding Column To Large Table In Sql Server

    - by Chris
    I want to add a column to a Sql Server table with about 10M rows. I think this query would eventually finish adding the column I want: alter table T add mycol bit not null default 0 but it's been going for several hours already. Is there any shortcut to get a "not null default 0" column inserted into a large table? Or is this inherently really slow? This is Sql Server 2000. Later on I have to do something similar on Sql Server 2008.
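
    Not from the post, but a commonly used workaround is to avoid the single giant transaction: add the column as NULLable (a metadata-only change), backfill it in small batches, then switch on NOT NULL and the default. Sketched below with pyodbc purely for illustration; the connection string, batch size and constraint name are placeholders, and the batching syntax (SET ROWCOUNT) is the SQL Server 2000 form:

        # Add the column without one monster transaction: add it NULLable (instant),
        # backfill in small batches, then enforce NOT NULL plus the default at the end.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes",
            autocommit=True)
        cur = conn.cursor()

        cur.execute("ALTER TABLE T ADD mycol bit NULL")   # metadata only: no row rewrite yet

        cur.execute("SET ROWCOUNT 5000")                  # cap each UPDATE batch (SQL 2000 style)
        while True:
            cur.execute("UPDATE T SET mycol = 0 WHERE mycol IS NULL")
            if cur.rowcount == 0:                         # nothing left to backfill
                break
        cur.execute("SET ROWCOUNT 0")                     # back to unlimited

        cur.execute("ALTER TABLE T ALTER COLUMN mycol bit NOT NULL")
        cur.execute("ALTER TABLE T ADD CONSTRAINT DF_T_mycol DEFAULT 0 FOR mycol")
        conn.close()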

    Read the article
