Search Results

Search found 19625 results on 785 pages for 'local groups'.

  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("ok").

    Desired: I would like for Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7.

    Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.)

    So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
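    One possible workaround (a sketch under assumptions, not something from the original question): register an HttpModule that answers conditional GETs for "/Content" requests itself, so Cassini behaves more like IIS7. The module name and the cache policy below are illustrative.

        using System;
        using System.IO;
        using System.Web;

        // Sketch: emit 304s for unchanged static files under /Content when run on Cassini.
        public class StaticCacheModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += delegate
                {
                    HttpContext ctx = app.Context;
                    if (!ctx.Request.Path.StartsWith("/Content", StringComparison.OrdinalIgnoreCase))
                        return;

                    string file = ctx.Server.MapPath(ctx.Request.Path);
                    if (!File.Exists(file))
                        return;

                    DateTime lastWrite = File.GetLastWriteTimeUtc(file);
                    string ims = ctx.Request.Headers["If-Modified-Since"];
                    DateTime since;
                    if (ims != null && DateTime.TryParse(ims, out since)
                        && lastWrite <= since.ToUniversalTime().AddSeconds(1))
                    {
                        // The browser's copy is current: short-circuit with a 304.
                        ctx.Response.StatusCode = 304;
                        ctx.Response.SuppressContent = true;
                        app.CompleteRequest();
                        return;
                    }

                    // Otherwise serve normally, but stamp headers so the next request can be a 304.
                    ctx.Response.Cache.SetCacheability(HttpCacheability.Public);
                    ctx.Response.Cache.SetLastModified(lastWrite);
                };
            }

            public void Dispose() { }
        }

    The module would still need an <httpModules> registration in web.config to take effect.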

  • How to build Android for Samsung Galaxy Note

    - by Tr?n Ð?i
    I'd like to modify and build my own Android for my Samsung Galaxy Note. I've downloaded Android 4.1.2 from http://source.android.com and the Samsung open source for my Samsung Galaxy Note. After extracting the Samsung open source, I get two folders, Kernel and Platform, and two README text files.

    README_Kernel.txt:

        1. How to Build
           - get Toolchain
             From android git server, codesourcery and etc..
             - arm-eabi-4.6
           - edit build_kernel.sh
             edit "CROSS_COMPILE" to right toolchain path (you downloaded).
             EX) CROSS_COMPILE=$(android platform directory you download)/android/prebuilts/gcc/linux-x86/arm/arm-eabi-4.6/bin/arm-eabi-
             EX) CROSS_COMPILE=/usr/local/toolchain/arm-eabi-4.6/bin/arm-eabi-  // check the location of toolchain
           - execute Kernel script
             $ ./build_kernel.sh
        2. Output files
           - Kernel : arch/arm/boot/zImage
           - module : drivers/*/*.ko
        3. How to Clean
           $ make clean

    README_Platform.txt:

        [Step to build]
        1. Get android open source.
           : version info - Android 4.1 (Download site : http://source.android.com)
        2. Copy module that you want to build - to original android open source
           If same module exist in android open source, you should replace it. (no overwrite)
           # It is possible to build all modules at once.
        3. You should add module name to 'PRODUCT_PACKAGES' in 'build\target\product\core.mk' as following case.
           case 1) bluetooth             : should add 'audio.a2dp.default' to PRODUCT_PACKAGES
           case 2) e2fsprog              : should add 'e2fsck' to PRODUCT_PACKAGES
           case 3) libexifa              : should add 'libexifa' to PRODUCT_PACKAGES
           case 4) libjpega              : should add 'libjpega' to PRODUCT_PACKAGES
           case 5) KeyUtils              : should add 'libkeyutils' to PRODUCT_PACKAGES
           case 6) bluetoothtest\bcm_dut : should add 'bcm_dut' to PRODUCT_PACKAGES
           ex.) [build\target\product\core.mk] - add all module names for cases 1 ~ 6 at once
               PRODUCT_PACKAGES += \
                   e2fsck \
                   libexifa \
                   libjpega \
                   libkeyutils \
                   bcm_dut \
                   audio.a2dp.default
        4. In case of 'bluetooth', you should add following text in 'build\target\board\generic\BoardConfig.mk'
               BOARD_HAVE_BLUETOOTH := true
               BOARD_HAVE_BLUETOOTH_BCM := true
        5. execute build command
           $ ./build.sh user

    What do I need to do after following the two README files above?

  • Why isn't module_filters filtering Mail::IspMailGate in CPAN::Mini?

    - by user304122
    Edited: now a module under schwigon is giving the same problem! I am on a corporate PC that forces mcshield on everything that moves. I get blocked when trying to mirror:

        authors/id/J/JV/JV/EekBoek-2.00.01.tar.gz ... updated
        authors/id/J/JV/JV/CHECKSUMS ... updated
        Could not stat tmpfile '/cygdrive/t/cpan_mirror/authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz-4712': No such file or directory at /usr/lib/perl5/site_perl/5.10/LWP/UserAgent.pm line 851.
        authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz

    At this point, the mcshield virus scanner sticks its oar in. To maintain my mirror I execute:

        #!/usr/bin/perl
        use CPAN::Mini;   # the original snippet omitted this use line

        CPAN::Mini->update_mirror(
            remote => "http://mirror.eunet.fi/CPAN",
            local  => "/cygdrive/t/cpan_mirror/",
            trace  => 1,
            errors => 1,
            module_filters => [
                qr/kjkjhkjhkjkj/i,
                qr/clamav/i,
                qr/ispmailgate/i,
                qr/IspMailGate/,
                qr/Mail-IspMailGate/,
                qr/mail-ispmailgate/i,
            ],
            path_filters => [
                qr/ZZYYZZ/,
                #qr/WIED/,
                #qr/RJBS/,
            ]
        );

    It skips fine if I enable the WIED path filter, but that skips every WIED module; I just cannot get it to skip the one failing module while still mirroring the other WIED modules. Any ideas?

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,  -- the original post had [ContainerIdId], presumably a typo
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later?
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small?
    - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew
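    For reference, a minimal sketch of the SqlBulkCopy setup under discussion (the connection string, method name and option choices are illustrative assumptions, not the poster's code; TableLock and a batch size larger than the 300-row chunks are the usual tuning knobs):

        using System.Data;
        using System.Data.SqlClient;

        // Sketch: bulk-load pre-sorted rows with a table lock and an explicit batch size.
        static void BulkInsert(DataTable rows)
        {
            const string connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true"; // hypothetical
            using (var bulk = new SqlBulkCopy(connStr, SqlBulkCopyOptions.TableLock))
            {
                bulk.DestinationTableName = "BulkData";
                bulk.BatchSize = 5000;    // commit in larger chunks than one batch per 300 rows
                bulk.BulkCopyTimeout = 0; // no timeout for long-running loads
                bulk.WriteToServer(rows);
            }
        }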

  • Why can't perfmon see instances of my custom performance counter?

    - by spoulson
    I'm creating some custom performance counters for an application. I wrote a simple C# tool to create the categories and counters. For example, the code snippet below is basically what I'm running. Then, I run a separate app that endlessly refreshes the raw value of the counter. While that runs, the counter and dummy instance are seen locally in perfmon.

    The problem I'm having is that the monitoring system we use can't see the instances in the multi-instance counter I've created when viewing remotely from another server. When using perfmon to browse the counters, I can see the category and counters, but the instances box is grayed out and I can't even select "All instances", nor can I click "Add". Other access methods, like typeperf, exhibit similar issues. I'm not sure if this is a server or code issue. This is only reproducible in the production environment where I need it. On my desktop and development servers, it works great. I'm a local admin on all servers.

        CounterCreationDataCollection collection = new CounterCreationDataCollection();
        var category_name = "My Application";
        var counter_name = "My counter name";

        CounterCreationData ccd = new CounterCreationData();
        ccd.CounterType = PerformanceCounterType.RateOfCountsPerSecond64;
        ccd.CounterName = counter_name;
        ccd.CounterHelp = counter_name;
        collection.Add(ccd);

        PerformanceCounterCategory.Create(category_name, category_name,
            PerformanceCounterCategoryType.MultiInstance, collection);

    Then, in a separate app, I run this to generate dummy instance data:

        var pc = new PerformanceCounter(category_name, counter_name, instance_name, false);
        while (true)
        {
            pc.RawValue = 0;
            Thread.Sleep(1000);
        }
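    As a diagnostic (a sketch, not from the original question; the machine name is a placeholder), the same API can ask the remote server which instances it actually exposes, which helps separate a code problem from a remote-registry/permissions problem:

        using System;
        using System.Diagnostics;

        // Sketch: list the instances a remote machine reports for the category.
        var category = new PerformanceCounterCategory("My Application", "remote-server");
        foreach (string instance in category.GetInstanceNames())
            Console.WriteLine(instance);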

  • C# WinForms. Multiple Forms in separate threads

    - by Calum Murray
    I'm trying to run an ATM Simulation in C# with Windows Forms that can have more than one instance of an ATM machine transacting with a bank account simultaneously. The idea is to use semaphores/locking to block critical code that may lead to race conditions. My question is this: How can I run two Forms simultaneously on separate threads? In particular, how does all of this fit in with the Application.Run() that's already there? Here's my main class:

        public class Bank
        {
            private Account[] ac = new Account[3];
            private ATM atm;

            public Bank()
            {
                ac[0] = new Account(300, 1111, 111111);
                ac[1] = new Account(750, 2222, 222222);
                ac[2] = new Account(3000, 3333, 333333);
                Application.Run(new ATM(ac));
            }

            static void Main(string[] args)
            {
                new Bank();
            }
        }

    ...that I want to run two of these forms on separate threads...

        public partial class ATM : Form
        {
            //local reference to the array of accounts
            private Account[] ac;
            //this is a reference to the account that is being used
            private Account activeAccount = null;
            private static int stepCount = 0;
            private string buffer = "";

            // the ATM constructor takes an array of account objects as a reference
            public ATM(Account[] ac)
            {
                InitializeComponent(); //Sets up Form ATM GUI in ATM.Designer.cs
                this.ac = ac;
            }
            ...
        }

    I've tried using

        Thread ATM2 = new Thread(new ThreadStart(/*What goes in here?*/));

    But what method do I put in the ThreadStart constructor, since the ATM form is event-driven and there's no one method controlling it? Thanks, Calum
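    One common pattern (a sketch, not the poster's code): give each form its own thread and start a message loop on it with Application.Run, so each ATM window pumps its own events:

        // Sketch: start a second ATM window with its own message loop.
        Thread atmThread = new Thread(() => Application.Run(new ATM(ac)));
        atmThread.SetApartmentState(ApartmentState.STA); // WinForms requires STA threads
        atmThread.IsBackground = true;                   // don't keep the process alive on its own
        atmThread.Start();

    The shared Account objects then become the state that the semaphores/locks have to protect.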

  • gevent install on x86_64 fails: "undefined symbol: evhttp_accept_socket"

    - by digitala
    I'm trying to install gevent on a fresh EC2 CentOS 5.3 64-bit system. Since the libevent version available in yum was too old for another package (beanstalkd), I compiled/installed libevent-1.4.13-stable manually using the following command:

        ./configure --prefix=/usr && make && make install

    This is the output from installing gevent:

        [gevent-0.12.2]# python setup.py build --libevent /usr/lib
        Using libevent 1.4.13-stable: libevent.so
        running build
        running build_py
        running build_ext
        Linking /usr/src/gevent-0.12.2/build/lib.linux-x86_64-2.6/gevent/core.so to /usr/src/gevent-0.12.2/gevent/core.so

        [gevent-0.12.2]# cd /path/to/my/project
        [project]# python myscript.py
        Traceback (most recent call last):
          File "myscript.py", line 9, in <module>
            from gevent.wsgi import WSGIServer as GeventServer
          File "/usr/lib/python2.6/site-packages/gevent/__init__.py", line 32, in <module>
            from gevent.core import reinit
        ImportError: /usr/lib/python2.6/site-packages/gevent/core.so: undefined symbol: evhttp_accept_socket

    I've followed exactly the same steps on a local VirtualBox instance (32-bit) and I'm not seeing any errors. How would I fix this?

  • AS3 httpservice - pass arguments to event handlers by reference

    - by Shawn Simon
    I have this code:

        var service:HTTPService = new HTTPService();
        if (search.Location && search.Location.length > 0 && chkLocalSearch.selected) {
            service.url = 'http://ajax.googleapis.com/ajax/services/search/local';
            service.request.q = search.Keyword;
            service.request.near = search.Location;
        } else {
            service.url = 'http://ajax.googleapis.com/ajax/services/search/web';
            service.request.q = search.Keyword + " " + search.Location;
        }
        service.request.v = '1.0';
        service.resultFormat = 'text';
        service.addEventListener(ResultEvent.RESULT, onServerResponse);
        service.send();

    I want to pass the search object to the result method (onServerResponse), but if I do it in a closure it gets passed by value. Is there any way to do it by reference, without searching through my array of search objects for the value returned in the result? Sorry, this is simple but it's been a long day...

  • emacs: x-popup-menu max size constraints?

    - by Cheeso
    I'm working on an intellisense or code-completion capability for C#. So far, so good. Right now I have basic completion working. There are two ways to request completion: the first cycles through all the potential matches; the second presents a popup menu of the matches. It works for types, and also for local variables (screenshots in the original post). I'm confronting two problems with x-popup-menu:

    1. The popup menu can expand to consume all available screen space when the number of choices is large. Literally, it can obscure the entire screen. The silly thing is, it's scrollable: first it expands to consume all available space, then it also becomes scrollable. Is there a way I can limit the maximum size of x-popup-menu?

    2. To specify the position of the popup menu, I pass in a position, and x-popup-menu uses that as the *middle*, not the left, of the top line of the menu. Why middle? Who knows. What this means is, if I specify (40 . 60) for the location of the menu, and the menu happens to be 100 pixels wide, the menu will extend beyond the left border of the emacs window. You can see this in the second screenshot from the original post. If I knew how wide the popup would be before specifying the position, I could compensate. But I don't. Is there a workaround? Is there a way to get x-popup-menu to take its position as the LEFT rather than the middle?

  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
          |
          +-[Client]
          | |
          | +-(EXE & several DLL assemblies)
          |
          +-[Common]
          | |
          | +-[Schema Assemblies]
          | | |
          | | +-(several DLL assemblies)
          | |
          | +-(several DLL assemblies)
          |
          +-[Server]
            |
            +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs from other projects in the solution, and others are plain 3rd-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application.

    Now, if I take a look at reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine.

    What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking?

    To be clear - this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!
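    For what it's worth, one general mechanism that can produce this kind of behavior (a sketch of the technique, not a claim about what this particular solution is doing): hook AppDomain.AssemblyResolve early in startup and load assemblies from a known relative directory. The "..\Common" path is an illustrative assumption:

        using System;
        using System.IO;
        using System.Reflection;

        // Sketch: resolve assemblies from a sibling Common directory at runtime.
        AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
        {
            string name = new AssemblyName(args.Name).Name + ".dll";
            string candidate = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"..\Common\" + name);
            return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
        };

    Because the lookup is relative to the application base, a scheme like this keeps working no matter where the root directory is moved or what it is renamed to.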

  • How can I open VLC via browser with PHP (Mac OS X)

    - by Damiqib
    I'm trying to open VLC via browser and make it instantly play the given video file on Mac OS X. This runs on my local server and is only meant to run locally; therefore I already run apache (MAMP) with my username and with group "staff" (defined in httpd.conf). YES - I do know that VLC has an http interface - however that is not what I need, so do not suggest that...

    My current system works without any problems when I run it via Terminal: `php /var/www/Movies/index.php` - this leads to VLC opening and the video playing fullscreen as intended. Problems start when I run the same PHP page with a browser. Then the VLC process starts, but there's no GUI for it, the video file won't start playing, and the VLC process takes nearly 100% of CPU.

    - Both terminal-started and browser-started VLC processes run with the same user (mine)
    - Both have "Parent process" bash
    - The VLC process begun with Terminal has an empty "Process group" (only a process id-number); the browser-started one has "httpd" + (id-number)
    - The VLC process started via browser makes 1000 times more "Mach System Calls" than its Terminal-started counterpart

    Could anyone give me any pointers on how to get this thing working?

    index.php:

        # $j is a file path to the videofile and is defined before
        exec('/var/www/Movies/vlc.sh "' . $j . '" > /dev/null 2>&1 & echo $!;');

        # If I do this in the given PHP-page it tells me that apache is running
        # with my username and with the group "staff" like it should be...
        exec('whoami');

    vlc.sh:

        #!/bin/bash
        # Activate VLC in 5 seconds to make it the front-most window
        (sleep 5; open -a VLC) &
        # Open video file
        /Applications/VLC.app/Contents/MacOS/VLC --quiet --fullscreen "$1"

  • Unable to upload large files on FTP using Apache commons-net-3.1

    - by Nitin
    I am trying to upload a large file (more than 8 MB) using the storeFile(remote, local) method of FTPClient, but it returns false. The file gets uploaded with some extra bytes. Following is the code, with output:

        public class Main {
            public static void main(String[] args) {
                FTPClient client = new FTPClient();
                FileInputStream fis = null;
                try {
                    client.connect("208.106.181.143");
                    // note: setFileTransferMode expects a transfer-mode constant;
                    // binary transfers normally use setFileType(FTP.BINARY_FILE_TYPE)
                    client.setFileTransferMode(client.BINARY_FILE_TYPE);
                    client.login("abc", "java");
                    int reply = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection:" + reply);
                    if (FTPReply.isPositiveCompletion(reply)) {
                        System.out.println("Connected Success");
                    }
                    client.changeWorkingDirectory("/" + "Everbest" + "/");
                    client.makeDirectory("ETPSupplyChain5.3-EvbstSP3");
                    client.changeWorkingDirectory("/" + "Everbest" + "/" + "ETPSupplyChain5.3-EvbstSP3" + "/");
                    FTPFile[] names = client.listFiles();
                    String filename = "E:\\Nitin\\D-Drive\\Installer.rar";
                    fis = new FileInputStream(filename);
                    boolean result = client.storeFile("Installer.rar", fis);
                    int replyAfterupload = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection replyAfterupload:" + replyAfterupload);
                    System.out.println("result:" + result);
                    for (FTPFile name : names) {
                        System.out.println("Name = " + name);
                    }
                    client.logout();
                    fis.close();
                    client.disconnect();
                } catch (SocketException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Output:

        Received Reply from FTP Connection:230
        Connected Success
        32
        /Everbest/ETPSupplyChain5.3-EvbstSP3
        Received Reply from FTP Connection replyAfterupload:150
        result:false

  • ASP.NET MVC 2 DisplayFor()

    - by ZombieSheep
    I'm looking at the new version of ASP.NET MVC (see here for more details if you haven't seen it already) and I'm having some pretty basic trouble displaying the content of an object. In my controller I have an object of type "Person", which I am passing to the view in ViewData.Model. All is well so far, and I can extract the object in the view ready for display. What I don't get, though, is how I need to call the Html.DisplayFor() method in order to get the data to screen. I've tried the following...

        <% MVC2test.Models.Person p = ViewData.Model as MVC2test.Models.Person; %>
        // snip
        <%= Html.DisplayFor(p => p) %>

    but I get the following message:

        CS0136: A local variable named 'p' cannot be declared in this scope because it would give a different meaning to 'p', which is already used in a 'parent or current' scope to denote something else

    I know this is not what I should be doing - I know that redefining a variable will produce this error, but I don't know how to access the object from the controller. So my question is, how do I pass the object to the view in order to display its properties? (I should add that I am reading up on this in my limited spare time, so it is entirely possible I have missed something fundamental) TIA
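    A sketch of the usual fix (assuming the view can be made strongly typed; the Name property is an invented example): type the page as ViewPage<Person>, so the lambda parameter is the model and doesn't clash with a local variable:

        <%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<MVC2test.Models.Person>" %>

        <%-- "m" is the model itself; any parameter name not already in scope works --%>
        <%= Html.DisplayFor(m => m.Name) %>

    With a strongly typed view there is no need to cast ViewData.Model into a local at all.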

  • What would cause native gem extensions on OS X to build but fail to load?

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1. Ruby 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet. Ruby is running in 32-bit mode. I built this ruby from scratch when my MBP ran OS X 10.4.

    When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right. But gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of ‘foo’ with different width due to prototype."

    I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael

  • encrypt apache and mysql servers

    - by stormdrain
    I have a question about encrypting disks. I have 2 servers: server 1 is apache for the web/frontend, and it talks to server 2, which is mysql. Both are for intranet use only; no external access. I was looking into using PGP or GnuPG to encrypt the disks. I'm not clear, though, on exactly how this would work. Where would the keys be stored? On the client? On apache? If there is a key on apache to access mysql, does there need to be a key for each user? If so, if key 1 is used to alter some data, would that data then be inaccessible to a user using key 2? And the apache key - would that only be accessible to users with local keys? Is encryption done on the fly? Does it degrade performance? What would be the best approach to encrypt the data on these servers while keeping it accessible to users? Thanks!

  • Performance of java on different hardware?

    - by tangens
    In another SO question I asked why my java programs run faster on AMD than on Intel machines. But it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for this comparison. For participation, please download JBoss 5.1.0.GA and run:

        jboss-5.1.0.GA/bin/run.sh   (or run.bat)

    This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable, and post this line together with some comments on your hardware (I used cpu-z to get the infos) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results.

    EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained with some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 with less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.

  • Javascript/PHP and timezones

    - by James
    Hi, I'd like to be able to guess the user's timezone offset and whether or not daylight savings is being applied. Currently, the most definitive code that I've found for this is here: http://www.michaelapproved.com/articles/daylight-saving-time-dst-detect/ So this gives me the offset along with the DST indicator. Now, I want to use these in my PHP scripts in order to output the local date/time for the user... but what's best for this? I figure I have 2 options:

    a) Pick a random timezone which has the same offset and DST setting from the output of timezone_abbreviations_list(). Then call date_timezone_set() with this in order to apply the correct treatment to the time.

    b) Continue treating the date as UTC but just do some timestamp addition to add the appropriate number of hours on.

    My feeling is that option b) is the best way. The reason for this is that with a), I could be using a timezone which, although correct in terms of offset/DST, may have some obscure rules in place behind the scenes that could give surprising results (I don't know of any, but nonetheless I don't think I can rule it out). I'd then re-check the timezone using Javascript at the start of each session, in order to capture when either the user's timezone changes (very unlikely) or they pass into the DST period. Sorry for the brain dump - I'm really just after some sort of reassurance that the approaches above are valid. Thanks, James.

  • embed video object in html

    - by kc rajput
    Hi, I embedded a video in an html page with a swf file. It runs fine on localhost, but when I run it on the live server it doesn't work properly. I linked an flv video in the swf file and embedded the swf in html:

        <script type="text/javascript">
        AC_FL_RunContent( 'codebase','http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,28,0','width','600','height','338','title','testing','src','Edit_video/9vi/home-page2','quality','high','pluginspage','http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash','movie','Edit_video/9vi/home-page2' );
        //end AC code
        </script>
        <noscript>
          <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,28,0" width="600" height="338" title="testing">
            <param name="movie" value="Edit_video/9vi/home-page2.swf" />
            <param name="quality" value="high" />
            <!-- note: this src omits the 9vi/ directory used in the param above -->
            <embed src="Edit_video/home-page2.swf" quality="high" pluginspage="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash" type="application/x-shockwave-flash" width="600" height="338"></embed>
          </object>
        </noscript>

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I've got lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, of which any one can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly, because it is also cgi or fastcgi bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster.

    The solution can work under Windows or Unix... it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (and the occasional hit to the SQL server backend). Although it is fast (for php anyway), Apache has real problems when 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either Asp.net or fastcgi with C(++).
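    For the ASP.NET option, the "static lookup tables in memory" idea amounts to something like this sketch (the handler and table contents are illustrative assumptions):

        using System.Collections.Generic;
        using System.Web;

        // Sketch: a lookup table loaded once per worker process and shared by all requests.
        public class LookupHandler : IHttpHandler
        {
            // static + read-only after startup = safe to read from many request threads
            private static readonly Dictionary<string, string> Table = LoadTable();

            private static Dictionary<string, string> LoadTable()
            {
                // in reality: read once from disk or a database at startup
                return new Dictionary<string, string> { { "example-key", "example-value" } };
            }

            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string value;
                Table.TryGetValue(context.Request.QueryString["k"] ?? "", out value);
                context.Response.Write(value ?? "not found");
            }
        }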

  • client-server syncing methodology [theoretical]

    - by Kenneth Ballenegger
    I'm in the process of building a web app that syncs with an iOS client. I'm currently trying to figure out how to go about syncing. I've come up with the following two directions.

    I've got a fairly simple server web app with a list of items. They are ordered by date modified, and as such syncing the order does not matter.

    One direction I'm considering is to let the client deal with syncing. I've already got an API that lets the client get the data, as well as do certain actions on it, such as update, add or remove single items. I was considering: 1) on each sync, asking the server for all items modified since the last successful sync and updating the local records based on what's returned by the server, and 2) building a persistent queue of create/remove/update requests on the client, and keeping them until confirmation by the server. The risk with this approach is that I'm basically asking each side to send changes to the other side, hoping it works smoothly, but risking a divergence at some point. This would probably be more bandwidth-efficient, though.

    The other direction I was considering was a more traditional model. I would have a "sync" process in which the client would send its whole list to the server (or a subset since the last modified sync), the server would update the data on the server (fixing conflicts by keeping the last-modified item, and keeping deleted items with a deleted = 1 field), and the server would return an updated list of items (since the last successful sync), which the client would then replace its data with. Thoughts?
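    A minimal sketch of the first direction's client-side bookkeeping (all type and field names invented for illustration): every local mutation goes into a persistent outbox, and an entry leaves the outbox only when the server confirms it:

        using System;
        using System.Collections.Generic;

        // Sketch: pending operations survive until the server acknowledges them.
        enum OpKind { Create, Update, Remove }

        class PendingOp
        {
            public OpKind Kind;
            public string ItemId;
            public DateTime QueuedAtUtc = DateTime.UtcNow;
        }

        class SyncState
        {
            // "modified since" watermark for asking the server about changes
            public DateTime LastSuccessfulSyncUtc;

            // drained front-to-back as the server confirms each request
            public readonly Queue<PendingOp> Outbox = new Queue<PendingOp>();

            public void Acknowledge()
            {
                Outbox.Dequeue();
            }
        }

    The divergence risk described above is exactly the case where the outbox and the server's "modified since" answer disagree about the same item.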

  • Plink SSH: '-m file' option not working

    - by Technext
    Hi, I am trying to use Plink to run commands on a remote server. Both the local and remote machines are Windows. Though I am able to connect to the remote machine using Plink, I am not able to use the '-m file' option. I tried the following three ways, but to no avail.

    Try 1:

        plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt

        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    (file.txt only contains one command, i.e., dir)

    Try 2:

        plink.exe -ssh -pw mypwd gchhabra@machine dir

        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    Try 3:

        plink.exe -ssh -pw mypwd gchhabra@machine < file.txt

    In this case, I get the following output:

        Using username "gchhabra".
        ****USAGE WARNING****
        This is a private computer system. This computer system, including all
        ..... including personal information, placed or sent over this system may be monitored.
        Use of this computer system, authorized or unauthorized, constitutes consent ...
        constitutes consent to monitoring for these purposes.

        dirCould not chdir to home directory /home/gchhabra: No such file or directory
        Microsoft Windows [Version x.x.xxx]
        (C) Copyright 1985-2003 Microsoft Corp.

        C:\Program Files\OpenSSH

    After I get the above prompt, it hangs. Can anyone please help me with this? Regards, Gaurav

  • How to read LARGE Sqlite file to be copied into Android emulator, or device from assets folder?

    - by Peter SHINe
    I guess many people have already read this article - Using your own SQLite database in Android applications: http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368

    However, it keeps throwing an IOException at:

        while ((length = myInput.read(buffer)) > 0) {
            myOutput.write(buffer, 0, length);
        }

    I'm trying to use a large DB file. It's as big as 8 MB. I built it using sqlite3 in Mac OS X, inserted UTF-8 encoded strings (for I am using Korean), and added the android_meta table with ko_KR as locale, as instructed above. However, when I debug, it keeps showing an IOException at

        length = myInput.read(buffer)

    I suspect it's caused by trying to read a big file. If not, I have no clue why. I tested the same code using a much smaller text file, and it worked fine. Can anyone help me out on this? I've searched many places, but no place gave me a clear answer, or a good solution ("good" meaning efficient or easy). I will try using BufferedInput(Output)Stream, but if the simpler one cannot work, I don't think that will work either. Can anyone explain the fundamental limits of file input/output in Android, and the right way around them, possibly? I will really appreciate anyone's considerate answer. Thank you.

    WITH MORE DETAIL:

        private void copyDataBase() throws IOException {
            // Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);

            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;

            // Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);

            // transfer bytes from the inputfile to the outputfile
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer)) > 0) {
                myOutput.write(buffer, 0, length);
            }

            // Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using JAX-WS 2.2 RI (Metro 2.0). 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a reponse (2.1 quoted the charset value, otherwise same response). Apparently I need to dictate the exact Content-type header for Exchange to be happy. Is there a way for me to do this without forcing me to manually rebuild the dependency? I currently rely on published maven artifacts, and would like to continue doing this if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the applications scope, but can not add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the jaxws RI implementation, not the generated code. The resulting HTTP transport is thus handled by the standard http/https client from Sun JRE5 or JRE6.

  • Azure : The process cannot access the file "" because it is being used by another process.

    - by Shantanu
    Hi all, I am trying to get a matlab-compiled exe running on the Azure cloud, and for that purpose need to get a v78.zip onto the local storage of the cloud and unzip it, before I can try to run an exe on the cloud. The program works fine when executed locally, but on deployment gives an error at the line marked below in the code. The error is:

        The process cannot access the file 'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip' because it is being used by another process.

        Exception Details: System.IO.IOException: The process cannot access the file 'C:\Resources\directory\cc0a20f5c1314f299ade4973ff1f4cad.WebRole.LocalStorage1\v78.zip' because it is being used by another process.

    The code is given below:

        string localPath = RoleEnvironment.GetLocalResource("LocalStorage1").RootPath;
        Response.Write(localPath + " \n");
        Directory.SetCurrentDirectory(localPath);

        CloudBlob mblob = GetProgramContainer().GetBlobReference("v78.zip");
        CloudBlockBlob mbblob = mblob.ToBlockBlob;
        CloudBlob zipblob = GetProgramContainer().GetBlobReference("7z.exe");

        string zipPath = Path.Combine(localPath, "7z.exe");
        string matlabPath = Path.Combine(localPath, "v78.zip");

        IEnumerable<ListBlockItem> blocklist = mbblob.DownloadBlockList();
        BlobStream stream = mbblob.OpenRead();
        FileStream fs = File.Create(matlabPath);   // (exception occurs here)

    It'll be a great help if someone could tell me where I'm going wrong. Thanks! Shan
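    For comparison, a sketch of the download written so each stream is disposed as soon as the copy finishes (this uses the v1.x storage client's DownloadToStream; whether it clears the lock above depends on who else still has v78.zip open):

        // Sketch: let using-blocks close the file handle when the copy completes.
        CloudBlob blob = GetProgramContainer().GetBlobReference("v78.zip");
        string target = Path.Combine(localPath, "v78.zip");

        using (FileStream fs = File.Create(target))
        {
            blob.DownloadToStream(fs); // replaces the manual OpenRead/File.Create pairing
        }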

  • Deployments and TFS, general questions

    - by Velika
    SOX requires that we have a separate group deploy our ASP.NET web to production. Currently, that group has access to our current code repository in VSS and uses VSS to deploy code that has been checked into VSS. How are deployments typically done for web applications? As a developer, I have used the Deploy function in Visual Studio to deploy code to a network share which corresponds to a IS virtual folder, but I don't think we can expect that the deployment group will be purchasing a copy of Visual Studio just to do deployments. We could check the code into TFS, but what is the minimum software that that group would need to perform the deployment? Would a Team Explorer Client Access suffice? I am aware that Team System has functionality to automate the building of an application. Do people typically deploy to Production by copying aspx and dlls files from the QA environment to production or do you normally deploy from TFS or even VS directly? It seems to me that the preferred approach would be to deploy from the QA environment, since that is the environment that must have been approved for release or that those files should be checked into TFS and the deployed from TFS, assuming you can deploy from TFS. What confuses me is whether bin (binary) files that are local to the project-do they go into TFS? Is so, doesn't this create problems for other developers in that only 1 developers-the one with the binary checked - can actually debug because debugging requires write access to the binaries? Does this mean that the binaries shouldn't be checked into TFS? But eventually, if you deploy from TFS, the binaries HAVE to be added to TFS. Are they added as a separate (compiled) application node? If so,m this sounds real ugly. I would assume not. How does one ensure that the binaries match the source code that we mark with a particular version number? Obviously, I'm clueless. Can someone give me a general idea of how you handle version control and deployments in particular using TFS?
