Search Results

Search found 19625 results on 785 pages for 'local groups'.


  • Appending data to NSFetchedResultsController during find or create loop

    - by Justin Williams
    I have a table view that is managed by an NSFetchedResultsController. I am having an issue with a find-or-create operation, however. When the user hits the bottom of my table view, I query my server for another batch of content. If an item doesn't exist in the local cache, we create it and store it. If it does exist, however, I want to append that data to the fetched results controller and display it. I can't quite figure that part out. Here's what I'm doing thus far:

      1. Pass the returned array of values from my server to an NSOperation to process.
      2. In the operation, create a new managed object context to work with.
      3. In the operation, iterate through the array and execute a fetch request to see whether each object exists (based on its server id).
      4. If the object doesn't exist, create it and insert it into the operation's managed object context.
      5. After the iteration completes, save the managed object context, which triggers a merge notification on my main thread.

    At this point, any objects that weren't locally cached in my Core Data store before will appear, but the ones that previously existed do not come along for the ride. I feel like it's something simple I'm missing, and could use a nudge in the right direction.


  • C# StripStatusText Update Issue

    - by ikurtz
    I am here due to some strange behaviour in a Button_Click event. The code is attached. The issue is that the first StripStatus message is never displayed. Any ideas as to why?

        private void FireBtn_Click(object sender, EventArgs e)
        {
            // Disable local controls while launching the attack
            AwayTableLayoutPanel.Enabled = false;
            AwayCancelBtn.Enabled = false;
            FireBtn.Enabled = false;

            // The status bar message below is never displayed,
            // but the following sound clip is.
            GameToolStripStatusLabel.Text = "(Home vs. Away)(Attack Coordinate: ("
                + GameModel.alphaCoords(GridLock.Column) + "," + GridLock.Row
                + "))(Action: Fire)";

            if (audio)
            {
                SoundPlayer fire = new SoundPlayer(Properties.Resources.fire);
                fire.PlaySync();
                fire.Dispose();
            }

            // Compile the attack message
            XmlSerializer s;
            StringWriter w;

            FireGridUnit fireGridUnit = new FireGridUnit();
            fireGridUnit.FireGridLocation = GridLock;

            s = new XmlSerializer(typeof(FireGridUnit));
            w = new StringWriter();
            s.Serialize(w, fireGridUnit);

            // Send the attack message
            GameMessage GameMessageAction = new GameMessage();
            GameMessageAction.gameAction = GameMessage.GameAction.FireAttack;
            GameMessageAction.statusMessage = w.ToString();

            s = new XmlSerializer(typeof(GameMessage));
            w = new StringWriter();
            s.Serialize(w, GameMessageAction);
            SendGameMsg(w.ToString());

            GameToolStripStatusLabel.Text = "(Home vs. Away)(Attack Coordinate: ("
                + GameModel.alphaCoords(GridLock.Column) + "," + GridLock.Row
                + "))(Action: Awaiting Fire Result)";
        }

    EDIT: if I put in a MessageBox after the StripStatus message, the status is updated.
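    A likely explanation (general WinForms behavior, not from the original thread): assigning Text only invalidates the label; the actual repaint happens when the UI thread pumps messages, and PlaySync() blocks that thread before the paint can run. A minimal sketch of the usual workaround, assuming the label's parent StatusStrip is named gameStatusStrip (a hypothetical name):

        GameToolStripStatusLabel.Text = "...(Action: Fire)";
        // Force a synchronous repaint before PlaySync() blocks the UI thread.
        gameStatusStrip.Refresh();

    The MessageBox "fix" observed in the EDIT works for the same reason: showing it pumps the pending paint messages.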


  • Unit Testing Error - The unit test adapter failed to connect to the data source or to read the data

    - by michael.lukatchik
    I'm using VSTS 2K8 and I've set up a unit test project. In it, I have a test class with a method that does a simple assertion. I'm using an Excel 2007 spreadsheet as my data source. My test method looks like this:

        [DataSource("System.Data.Odbc",
            "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5",
            "Sheet1", DataAccessMethod.Sequential)]
        [DeploymentItem("MyTestData.xlsx")]
        [TestMethod()]
        public void State_Value_Is_Set()
        {
            string expected = "MD";
            string actual = TestContext.DataRow["State"] as string;
            Assert.AreEqual(expected, actual);
        }

    As indicated in the method's attributes, my Excel spreadsheet is on my local C: drive. In it, the sheet where all of my data is located is named "Sheet1". I've copied the spreadsheet into my project, set its Build Action to "Content", and set its Copy to Output Directory to "Copy if Newer". When trying to run this simple unit test, I receive the following error:

        The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library. Error details: ERROR [42S02] [Microsoft][ODBC Excel Driver] The Microsoft Office Access database engine could not find the object 'Sheet1'. Make sure the object exists and that you spell its name and the path name correctly.

    I've verified that the sheet name is spelled correctly (i.e. Sheet1) and that my data sources are set correctly. Web searches haven't turned up much at all, and I'm totally stumped. All help or input is appreciated!
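    One thing worth trying (an assumption, not from the original post): the ODBC Excel driver exposes each worksheet as a table named with a trailing dollar sign, so the table argument often needs to be "Sheet1$" rather than "Sheet1":

        [DataSource("System.Data.Odbc",
            "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5",
            "Sheet1$", DataAccessMethod.Sequential)]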


  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I've got lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up my responses very fast. Are there also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests. The processes are by definition shielded from each other, though. I could configure FastCGI to use just one process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly, because it is also CGI- or FastCGI-bound, right? I understand memcached could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix; it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now, growing to 500/sec in a year), and I want to reduce the number of web servers needed to process them. The current solution uses PHP and memcached (and the occasional hit to the SQL Server backend). Although it is fast (for PHP, anyway), Apache has real problems once 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).


  • How can I open VLC via browser with PHP (Mac OS X)

    - by Damiqib
    I'm trying to open VLC via a browser and make it instantly play a given video file on Mac OS X. This runs on my local server and is only meant to run locally, so I already run Apache (MAMP) with my username and with group "staff" (defined in httpd.conf). YES, I do know that VLC has an HTTP interface; however, that is not what I need, so please do not suggest it. My current setup works without any problems when I run it via Terminal: php /var/www/Movies/index.php leads to VLC opening and the video playing fullscreen, as intended. Problems start when I run the same PHP page in a browser. The VLC process starts, but there is no GUI for it, the video file won't start playing, and the VLC process takes nearly 100% of the CPU. What I've observed:

      - Both the Terminal- and browser-started VLC processes run as the same user (mine).
      - Both have "bash" as their parent process.
      - The VLC process begun from Terminal has an empty "Process group" (only a process id number), while the browser-started one has "httpd" + (id number).
      - The VLC process started via the browser makes 1000 times more "Mach System Calls" than its Terminal-started counterpart.

    Could anyone give me any pointers on how to get this thing working?

    index.php:

        # $j is a file path to the video file and is defined earlier
        exec('/var/www/Movies/vlc.sh "' . $j . '" > /dev/null 2>&1 & echo $!;');

        # Running this in the given PHP page confirms that Apache runs
        # with my username and with the group "staff", like it should
        exec('whoami');

    vlc.sh:

        #!/bin/bash
        # Activate VLC in 5 seconds to make it the front-most window
        (sleep 5; open -a VLC) &
        # Open the video file
        /Applications/VLC.app/Contents/MacOS/VLC --quiet --fullscreen "$1"


  • ASP.NET MVC on Cassini: How can I force the "content" directory to return 304s instead of 200s?

    - by Portman
    Scenario: I have an ASP.NET MVC application developed in Visual Studio 2008. There is a root folder named "Content" that stores images and stylesheets. When I run locally (using Cassini) and browse my application, every resource from the "Content" directory is always downloaded. Using Firebug, I can verify that the web server returns an HTTP 200 ("OK").

    Desired: I would like Cassini to return HTTP 304 ("not modified") instead of 200. This is the behavior when running the site under IIS7.

    Reasoning: The site I am working on has a large number of static resources (often as many as 40 per page). Browsing the site is very fast on IIS7, because these resources are (correctly) cached by the browser. However, browsing the site on my local machine is painfully slow. Pages that render in under 1 second on IIS7 take over 30 seconds to render on Cassini. It's actually faster for me to upload the entire website every few minutes and test from there. (Yes, I recognize that this is perverse and crazy.)

    So: how can I instruct/trick Cassini into treating the "Content" directory like IIS7 does?
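    Since every request under Cassini passes through the managed ASP.NET pipeline, one workaround (a sketch of mine, not from the original thread; the module and folder names are assumptions) is a small IHttpModule that honors If-Modified-Since for the Content folder:

        using System;
        using System.IO;
        using System.Web;

        // Hypothetical module: answers conditional GETs for /Content with 304
        // when the browser's copy is still current.
        public class StaticContent304Module : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    string path = ctx.Request.Path;
                    if (!path.StartsWith("/Content/", StringComparison.OrdinalIgnoreCase))
                        return;

                    string file = ctx.Server.MapPath(path);
                    if (!File.Exists(file))
                        return;

                    // HTTP dates are second-granular, so trim sub-second ticks.
                    DateTime lastWrite = File.GetLastWriteTimeUtc(file);
                    lastWrite = lastWrite.AddTicks(-(lastWrite.Ticks % TimeSpan.TicksPerSecond));

                    string header = ctx.Request.Headers["If-Modified-Since"];
                    DateTime since;
                    if (header != null && DateTime.TryParse(header, out since)
                        && since.ToUniversalTime() >= lastWrite)
                    {
                        ctx.Response.StatusCode = 304;
                        ctx.Response.SuppressContent = true;
                        ctx.ApplicationInstance.CompleteRequest();
                        return;
                    }

                    // Otherwise serve normally, stamped so the next request can 304.
                    ctx.Response.Cache.SetCacheability(HttpCacheability.Public);
                    ctx.Response.Cache.SetLastModified(lastWrite);
                };
            }

            public void Dispose() { }
        }

    Registered in web.config, this should only ever fire under the development server, since IIS7 serves /Content itself.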


  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    I'm trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm missing the MySQL headers, apparently. Since MySQL is already part of OS X Server 10.6, I would like to NOT install anything else (no Fink or DarwinPorts installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere, and if so, where? (Again: I don't want to install another version of MySQL on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual output is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
        /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory


  • C# WinForms. Multiple Forms in separate threads

    - by Calum Murray
    I'm trying to run an ATM simulation in C# with Windows Forms that can have more than one instance of an ATM machine transacting with a bank account simultaneously. The idea is to use semaphores/locking to protect critical code that may lead to race conditions. My question is this: how can I run two Forms simultaneously on separate threads? In particular, how does all of this fit in with the Application.Run() that's already there? Here's my main class:

        public class Bank
        {
            private Account[] ac = new Account[3];
            private ATM atm;

            public Bank()
            {
                ac[0] = new Account(300, 1111, 111111);
                ac[1] = new Account(750, 2222, 222222);
                ac[2] = new Account(3000, 3333, 333333);
                Application.Run(new ATM(ac));
            }

            static void Main(string[] args)
            {
                new Bank();
            }
        }

    ...and here is the form, of which I want to run two instances on separate threads:

        public partial class ATM : Form
        {
            // Local reference to the array of accounts
            private Account[] ac;

            // Reference to the account that is being used
            private Account activeAccount = null;

            private static int stepCount = 0;
            private string buffer = "";

            // The ATM constructor takes an array of account objects as a reference
            public ATM(Account[] ac)
            {
                InitializeComponent(); // Sets up the ATM GUI in ATM.Designer.cs
                this.ac = ac;
            }
            ...
        }

    I've tried using:

        Thread ATM2 = new Thread(new ThreadStart(/* What goes in here? */));

    But what method do I put in the ThreadStart constructor, since the ATM form is event-driven and there's no one method controlling it? Thanks, Calum
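    A minimal sketch of the usual pattern (built from the classes in the question): give each form its own thread running its own message loop via Application.Run, and mark each thread STA as Windows Forms requires.

        // Each call starts one ATM window with its own message pump;
        // Application.Run blocks that thread until its form closes.
        Thread atmThread = new Thread(() => Application.Run(new ATM(ac)));
        atmThread.SetApartmentState(ApartmentState.STA);
        atmThread.Start();

    The shared Account objects are then touched from multiple threads, which is exactly where the planned semaphores/locks come in.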


  • Performance of java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines. But it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for this comparison. For participation, please download JBoss 5.1.0.GA and run:

        jboss-5.1.0.GA/bin/run.sh (or run.bat)

    This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable, and post this line together with some comments on your hardware (I used CPU-Z to get the info) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results. EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained with some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 at less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.


  • Unable to upload large files on FTP using Apache commons-net-3.1

    - by Nitin
    I am trying to upload one large file (more than 8 MB) using the storeFile(remote, local) method of FTPClient, but it returns false, and the file gets uploaded with some extra bytes. Following is the code, with output:

        public class Main {
            public static void main(String[] args) {
                FTPClient client = new FTPClient();
                FileInputStream fis = null;
                try {
                    client.connect("208.106.181.143");
                    client.setFileTransferMode(client.BINARY_FILE_TYPE);
                    client.login("abc", "java");
                    int reply = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection:" + reply);
                    if (FTPReply.isPositiveCompletion(reply)) {
                        System.out.println("Connected Success");
                    }
                    client.changeWorkingDirectory("/" + "Everbest" + "/");
                    client.makeDirectory("ETPSupplyChain5.3-EvbstSP3");
                    client.changeWorkingDirectory("/" + "Everbest" + "/" + "ETPSupplyChain5.3-EvbstSP3" + "/");
                    FTPFile[] names = client.listFiles();
                    String filename = "E:\\Nitin\\D-Drive\\Installer.rar";
                    fis = new FileInputStream(filename);
                    boolean result = client.storeFile("Installer.rar", fis);
                    int replyAfterupload = client.getReplyCode();
                    System.out.println("Received Reply from FTP Connection replyAfterupload:" + replyAfterupload);
                    System.out.println("result:" + result);
                    for (FTPFile name : names) {
                        System.out.println("Name = " + name);
                    }
                    client.logout();
                    fis.close();
                    client.disconnect();
                } catch (SocketException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Output:

        Received Reply from FTP Connection:230
        Connected Success
        32
        /Everbest/ETPSupplyChain5.3-EvbstSP3
        Received Reply from FTP Connection replyAfterupload:150
        result:false
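    A couple of things stand out (my observations, not from the original post): setFileTransferMode() is being handed a file-type constant, so the transfer is probably not actually binary, and an ASCII-mode transfer of a .rar will add "extra bytes" by translating line endings. Also, reply code 150 only means the transfer has started. A sketch of the usual Commons Net setup:

        // After login: set the file TYPE to binary.
        // setFileTransferMode() expects a transfer-mode constant
        // (e.g. FTP.STREAM_TRANSFER_MODE), not BINARY_FILE_TYPE.
        client.setFileType(FTP.BINARY_FILE_TYPE);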


  • MVVM Light - master / child views and dependency properties

    - by Carl Dickinson
    I'm getting an odd problem when implementing a master/child view with custom dependency properties. Within my master view I'm binding the view model declaratively in the XAML as follows:

        DataContext="{Binding MainViewModelProperty, Source={StaticResource Locator}}"

    My MainViewModel exposes an observable collection, which I'm binding to an ItemsControl as follows:

        <ItemsControl ItemsSource="{Binding Lists}" Height="490" Canvas.Top="10" Width="70">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <Canvas>
                        <local:TaskListControl
                            Canvas.Left="{Binding ListLeft}"
                            Canvas.Top="{Binding ListTop}"
                            Width="{Binding ListWidth}"
                            Height="{Binding ListHeight}"
                            ListDetails="{Binding}"/>
                    </Canvas>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    TaskListControl in turn declares and binds to its own view model, and I've also defined a dependency property for the ListDetails property. The ListDetails property is not being set, and if I remove the declarative reference to the control's view model, the dependency property's callback does get fired. Is there a conflict between declaratively binding to view models and defining dependency properties? I really like MVVM Light's blendability and want to persevere with this problem, so any help would be appreciated. If you'd like to receive the source for my project, please ask.


  • ASP.NET MVC 2 DisplayFor()

    - by ZombieSheep
    I'm looking at the new version of ASP.NET MVC (see here for more details if you haven't seen it already) and I'm having some pretty basic trouble displaying the content of an object. In my controller I have an object of type "Person", which I am passing to the view in ViewData.Model. All is well so far, and I can extract the object in the view, ready for display. What I don't get, though, is how I need to call the Html.DisplayFor() method in order to get the data to screen. I've tried the following...

        <% MVC2test.Models.Person p = ViewData.Model as MVC2test.Models.Person; %>
        // snip
        <%= Html.DisplayFor(p => p) %>

    ...but I get the following message:

        CS0136: A local variable named 'p' cannot be declared in this scope because it would give a different meaning to 'p', which is already used in a 'parent or current' scope to denote something else

    I know this is not what I should be doing; I know that redefining a variable will produce this error. But I don't know how to access the object from the controller. So my question is: how do I pass the object to the view in order to display its properties? (I should add that I am reading up on this in my limited spare time, so it is entirely possible I have missed something fundamental.) TIA
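    A sketch of the usual fix (my assumption, not from the original thread): type the view to Person (Inherits="System.Web.Mvc.ViewPage<MVC2test.Models.Person>") so Html is an HtmlHelper<Person>, then let the lambda parameter stand for the model, with a name that doesn't collide with any local:

        <%-- assuming Person exposes a Name property (hypothetical) --%>
        <%= Html.DisplayFor(model => model.Name) %>

    The CS0136 error itself is plain C# scoping: p is already a local in the enclosing scope, so a lambda parameter cannot reuse that name.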


  • AS3 httpservice - pass arguments to event handlers by reference

    - by Shawn Simon
    I have this code:

        var service:HTTPService = new HTTPService();
        if (search.Location && search.Location.length > 0 && chkLocalSearch.selected) {
            service.url = 'http://ajax.googleapis.com/ajax/services/search/local';
            service.request.q = search.Keyword;
            service.request.near = search.Location;
        } else {
            service.url = 'http://ajax.googleapis.com/ajax/services/search/web';
            service.request.q = search.Keyword + " " + search.Location;
        }
        service.request.v = '1.0';
        service.resultFormat = 'text';
        service.addEventListener(ResultEvent.RESULT, onServerResponse);
        service.send();

    I want to pass the search object to the result method (onServerResponse), but if I do it in a closure it gets passed by value. Is there any way to do it by reference, without searching through my array of search objects for the value returned in the result? Sorry, this is simple, but it's been a long day...
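    A small sketch in AS3 (untested, using the names from the question): a closure captures the variable itself, and AS3 objects are reference types, so search arrives by reference rather than as a copy.

        service.addEventListener(ResultEvent.RESULT,
            function(event:ResultEvent):void {
                // same search instance the outer code holds, not a copy
                onServerResponse(event, search);
            });

    The classic gotcha is creating such closures in a loop, where every closure shares the single loop variable; that can masquerade as pass-by-value behavior.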


  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

      1. Drop the primary key while I am doing the inserting and recreate it later?
      2. Do inserts into a temporary table with the same schema and periodically transfer them into the main table, to keep the size of the table where insertions are happening small?
      3. Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
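    On the SqlBulkCopy side itself, a few knobs are worth trying (a sketch under assumptions about the setup; connectionString and dataTable are stand-ins, not names from the post):

        using (var bulk = new SqlBulkCopy(connectionString,
            SqlBulkCopyOptions.TableLock))   // table lock often enables minimally logged inserts
        {
            bulk.DestinationTableName = "BulkData";
            bulk.BatchSize = 5000;           // commit in larger batches than 300 rows
            bulk.BulkCopyTimeout = 0;        // no timeout on long-running copies
            bulk.WriteToServer(dataTable);   // rows pre-sorted by the clustered key
        }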


  • Clarification needed: How does .NET runtime resolve assembly references from parent folder?

    - by aoven
    I have the following output structure of executables in my solution:

        %ProgramFiles%
        |
        +-[MyAppName]
          |
          +-[Client]
          | |
          | +-(EXE & several DLL assemblies)
          |
          +-[Common]
          | |
          | +-[Schema Assemblies]
          | | |
          | | +-(several DLL assemblies)
          | |
          | +-(several DLL assemblies)
          |
          +-[Server]
            |
            +-(EXE & several DLL assemblies)

    Each project in the solution references different DLL assemblies, some of which are outputs from other projects in the solution, and others are plain third-party assemblies. For example, the [Client] EXE might reference an assembly in [Common], which is in a different directory branch. All references have "Copy Local" set to false, to mirror the layout of the files in the final installed application. Now, if I look at the reference properties in the Visual Studio IDE, I see that the "Path" of every reference is absolute and that it corresponds to the actual output location of the assembly. That's understandable and correct. As expected, the solution compiles and runs just fine. What I don't understand is why everything seems to work even when I close the IDE, rename the [MyAppName] directory, and run the [Client] EXE manually. How does the runtime find the assemblies if the reference paths aren't the same as they were at the time of linking? To be clear, this is actually exactly what I'm after: a semi-dispersed set of application files that run fine regardless of where the [MyAppName] directory is located or even what it's named. I'd just like to know how and why this works without any specific path resolution on my part. I've read the answers to this similar question, but I still don't get it. Help much appreciated!
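    For reference, a sketch of the general mechanism (standard .NET behavior, not something confirmed for this particular project): the compile-time "Path" of a reference is not baked into the EXE, only the assembly identity (name, version, culture, public key token) is, and at load time the runtime probes the application base directory plus any subfolders listed in privatePath. Something like this hypothetical Client.exe.config:

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <!-- probing is restricted to subdirectories of the app base -->
              <probing privatePath="Common;Common\Schema Assemblies" />
            </assemblyBinding>
          </runtime>
        </configuration>

    Since probing cannot reach outside the application base, sibling-folder layouts like the one above usually rely on an AppDomain.AssemblyResolve handler or codeBase hints instead.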


  • Why isn't module_filters filtering Mail::IspMailGate in CPAN::Mini?

    - by user304122
    Edited: Ummm, I now have a module in schwigon giving the same problem! I am on a corporate PC that forces mcshield on everything that moves. I get blocked when trying to mirror:

        authors/id/J/JV/JV/EekBoek-2.00.01.tar.gz ... updated
        authors/id/J/JV/JV/CHECKSUMS ... updated
        Could not stat tmpfile '/cygdrive/t/cpan_mirror/authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz-4712': No such file or directory at /usr/lib/perl5/site_perl/5.10/LWP/UserAgent.pm line 851.
        authors/id/J/JW/JWIED/Mail-IspMailGate-1.1013.tar.gz

    At this point, the virus scanner mcshield sticks its oar in. To maintain my mirror I execute:

        #!/usr/bin/perl
        use CPAN::Mini;

        CPAN::Mini->update_mirror(
            remote         => "http://mirror.eunet.fi/CPAN",
            local          => "/cygdrive/t/cpan_mirror/",
            trace          => 1,
            errors         => 1,
            module_filters => [
                qr/kjkjhkjhkjkj/i,
                qr/clamav/i,
                qr/ispmailgate/i,
                qr/IspMailGate/,
                qr/Mail-IspMailGate/,
                qr/mail-ispmailgate/i,
            ],
            path_filters => [
                qr/ZZYYZZ/,
                #qr/WIED/,
                #qr/RJBS/,
            ]
        );

    It skips OK if I enable the WIED path filter, but that skips everything by that author. I just cannot get it to skip only the failing module while completing the other WIED modules. Any ideas?


  • emacs: x-popup-menu max size constraints?

    - by Cheeso
    I'm working on an intellisense / code-completion capability for C#. So far, so good. Right now I have basic completion working. There are two ways to request completion: the first cycles through all the potential matches; the second presents a popup menu of the matches. It works both for types and for local variables (screenshots in the original post). I'm confronting two problems with x-popup-menu:

      1. The popup menu can expand to consume all available screen space when the number of choices is large. Literally, it can obscure the entire screen. The silly thing is, it's scrollable: first it expands to consume all available space, then it also becomes scrollable. Is there a way I can limit the maximum size of x-popup-menu?

      2. To specify the position of the popup menu, I pass in a position, and x-popup-menu uses that as the *middle*, not the left, of the top line of the menu. Why middle? Who knows. What this means is that if I specify (40 . 60) for the location of the menu, and the menu happens to be 100 pixels wide, the menu will extend beyond the left border of the emacs window (as the second screenshot showed). If I knew how wide the popup would be before specifying the position, I could compensate, but I don't. Is there a workaround? Is there a way to get x-popup-menu to take its position as the LEFT rather than the middle?


  • gevent install on x86_64 fails: "undefined symbol: evhttp_accept_socket"

    - by digitala
    I'm trying to install gevent on a fresh EC2 CentOS 5.3 64-bit system. Since the libevent version available in yum was too old for another package (beanstalkd), I compiled/installed libevent-1.4.13-stable manually using the following command:

        ./configure --prefix=/usr && make && make install

    This is the output from installing gevent:

        [gevent-0.12.2]# python setup.py build --libevent /usr/lib
        Using libevent 1.4.13-stable: libevent.so
        running build
        running build_py
        running build_ext
        Linking /usr/src/gevent-0.12.2/build/lib.linux-x86_64-2.6/gevent/core.so to /usr/src/gevent-0.12.2/gevent/core.so
        [gevent-0.12.2]# cd /path/to/my/project
        [project]# python myscript.py
        Traceback (most recent call last):
          File "myscript.py", line 9, in <module>
            from gevent.wsgi import WSGIServer as GeventServer
          File "/usr/lib/python2.6/site-packages/gevent/__init__.py", line 32, in <module>
            from gevent.core import reinit
        ImportError: /usr/lib/python2.6/site-packages/gevent/core.so: undefined symbol: evhttp_accept_socket

    I've followed exactly the same steps on a local VirtualBox instance (32-bit) and I'm not seeing any errors. How would I fix this?
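    A diagnostic worth running (my suggestion, not from the original post): check which libevent the extension resolves at runtime. On a 64-bit CentOS box, system libraries live in /usr/lib64, so an older stock libevent there can shadow the hand-built one installed under /usr/lib.

        # Which libevent does the compiled extension actually bind to?
        ldd /usr/lib/python2.6/site-packages/gevent/core.so | grep libevent

    If it resolves to an old copy that predates evhttp_accept_socket, rebuilding/installing libevent into the directory the loader actually searches is the usual fix.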


  • client-server syncing methodology [theoretical]

    - by Kenneth Ballenegger
    I'm in the process of building a web app that syncs with an iOS client. I'm currently trying to figure out how to go about syncing. I've come up with the following two directions.

    I've got a fairly simple server web app with a list of items. They are ordered by date modified, and as such syncing the order does not matter.

    One direction I'm considering is to let the client deal with syncing. I've already got an API that lets the client get the data, as well as perform certain actions on it, such as updating, adding or removing single items. I was considering: 1) on each sync, asking the server for all items modified since the last successful sync and updating the local records based on what's returned by the server, and 2) building a persistent queue of create/remove/update requests on the client, and keeping them until confirmation by the server. The risk with this approach is that I'm basically asking each side to send changes to the other side, hoping it works smoothly, but risking a divergence at some point. This would probably be more bandwidth-efficient, though.

    The other direction I was considering is a more traditional model. I would have a "sync" process in which the client sends its whole list to the server (or a subset since the last modified sync), the server updates its data (fixing conflicts by keeping the last-modified item, and keeping deleted items with a deleted = 1 field), and the server returns an updated list of items (since the last successful sync), which the client then replaces its data with. Thoughts?
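    A sketch of the record shape the second direction implies (all names are mine, purely illustrative):

        // Each item carries enough metadata for last-writer-wins merging
        // and tombstone deletes.
        public class SyncItem
        {
            public Guid Id { get; set; }              // stable across devices
            public DateTime ModifiedUtc { get; set; } // set on every write
            public bool Deleted { get; set; }         // the deleted = 1 tombstone
            public string Payload { get; set; }
        }

    The server-side merge rule is then one comparison per item: keep whichever copy has the later ModifiedUtc, and let deletions win the same way.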


  • embed video object in html

    - by kc rajput
    Hi. I embedded a video in an HTML page with an SWF file. It runs fine on localhost, but when I run it on the live server it doesn't work properly. I linked the FLV video in the SWF file and embedded it in the HTML:

        <script type="text/javascript">
        AC_FL_RunContent( 'codebase','http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,28,0','width','600','height','338','title','testing','src','Edit_video/9vi/home-page2','quality','high','pluginspage','http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash','movie','Edit_video/9vi/home-page2' );
        //end AC code
        </script>
        <noscript>
        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,28,0" width="600" height="338" title="testing">
          <param name="movie" value="Edit_video/9vi/home-page2.swf" />
          <param name="quality" value="high" />
          <embed src="Edit_video/home-page2.swf" quality="high" pluginspage="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash" type="application/x-shockwave-flash" width="600" height="338"></embed>
        </object>
        </noscript>
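    One discrepancy worth noting (my observation from the markup above, not a confirmed diagnosis): the <embed> src omits the 9vi/ directory that the <param name="movie"> uses, and live servers (often case-sensitive Linux) are far less forgiving about path mismatches than a local setup. The two should point at the same file:

        <embed src="Edit_video/9vi/home-page2.swf" quality="high"
               pluginspage="http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=ShockwaveFlash"
               type="application/x-shockwave-flash" width="600" height="338"></embed>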


  • encrypt apache and mysql servers

    - by stormdrain
    I have a question about encrypting disks. I have two servers: one is Apache for the web frontend, and it talks to server two, which is MySQL. They are intranet-only; no external access. I was looking into using PGP or GnuPG to encrypt the disks, but I'm not clear on exactly how this would work. Where would the keys be stored? On the client? On Apache? If there is a key on Apache to access MySQL, does there need to be a key for each user? If so, if key 1 is used to alter some data, would that data then be inaccessible to a user using key 2? And the Apache key, would that only be accessible to users with local keys? Is encryption done on the fly? Does it degrade performance? What would be the best approach to encrypt the data on these servers but keep it accessible to users? Thanks!


  • Merging changes to a workspace with uncommitted changes

    - by Kim L
    We've just recently switched over from SVN to Mercurial, but now we are running into problems with our workflow. An example: I have my local clone of the repository which I work on. I'm making some highly experimental changes to our code base, something that I don't want to commit before I'm sure it works the way it is supposed to; I don't want to commit it even locally. Now, simultaneously, my co-worker has made some significant improvements/bug fixes which I need. He pushes his commits to our main repository. The question is: how can I merge his changes into my workspace without having to commit all my changes, since I need his changes to test my own code?

    A more day-to-day problem we have with the exact same workflow involves a couple of configuration files which are in the repository. Each developer makes a couple of small, environment-specific changes to the configuration files, but does not commit the changes. These few uncommitted files hinder us from making any merges to our workspace, just like in the example above. Ideally, the configuration files probably shouldn't be in the repository; unfortunately, that's just how it has to be, for reasons that shall remain unnamed here.
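    For what it's worth, a sketch of the patch-based "shelving" idiom using plain Mercurial commands (the shelve/attic extensions automate the same dance):

        # stash the uncommitted work as a patch, then clean the working copy
        hg diff > experimental.patch
        hg revert --all

        # bring in the co-worker's changes
        hg pull -u

        # re-apply the experimental work on top, still uncommitted
        hg import --no-commit experimental.patch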


  • What would cause native gem extensions on OS X to build but fail to load?

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1, and Ruby 1.8.6 (because I'm lazy and a scaredy-cat and don't want to upgrade yet). Ruby is running in 32-bit mode; I built this Ruby from scratch when my MBP ran OS X 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's Nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right, but gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of 'foo' with different width due to prototype." I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael
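    A quick check that often pins this down (my suggestion; the path is taken from the error above): Snow Leopard's toolchain builds extensions as x86_64 by default, and a Ruby running in 32-bit mode cannot load an x86_64-only bundle.

        # Which architectures is the bundle built for?
        file /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

        # If i386 is missing, rebuilding with matching flags is the usual fix:
        sudo env ARCHFLAGS="-arch i386" gem install nokogiri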


  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Services interface using JAX-WS. I'm using the JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns:

        HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'.

    (2.1 quoted the charset value, but otherwise gave the same response.) Apparently I need to produce the exact Content-Type header, down to the space after the semicolon, for Exchange to be happy. Is there a way for me to do this without being forced to manually rebuild the dependency? I currently rely on published Maven artifacts and would like to continue doing so if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but I cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code. The resulting HTTP transport is thus handled by the standard HTTP/HTTPS client from Sun JRE5 or JRE6.


  • Plink SSH: '-m file' option not working

    - by Technext
    Hi, I am trying to use Plink to run commands on a remote server. Both the local and remote machines are Windows. Though I am able to connect to the remote machine using Plink, I am not able to use the '-m file' option. I tried the following three ways, but to no avail.

    Try 1 (file.txt contains only one command, i.e. dir):

        plink.exe -ssh -pw mypwd gchhabra@machine -m file.txt
        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    Try 2:

        plink.exe -ssh -pw mypwd gchhabra@machine dir
        Could not chdir to home directory /home/gchhabra: No such file or directory
        dir: not found

    Try 3:

        plink.exe -ssh -pw mypwd gchhabra@machine < file.txt

    In this case, I get the following output:

        Using username "gchhabra".
        ****USAGE WARNING****
        This is a private computer system. This computer system, including all ..... including personal information, placed or sent over this system may be monitored. Use of this computer system, authorized or unauthorized, constitutes consent ... constitutes consent to monitoring for these purposes.
        dirCould not chdir to home directory /home/gchhabra: No such file or directory
        Microsoft Windows [Version x.x.xxx]
        (C) Copyright 1985-2003 Microsoft Corp.
        C:\Program Files\OpenSSH

    After I get the above prompt, it hangs. Can anyone please help me with this? Regards, Gaurav
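    One reading of those errors (my interpretation, not a confirmed fix): "Could not chdir to home directory /home/gchhabra" and "dir: not found" suggest the Windows OpenSSH server is handing the command to a Unix-style shell, where dir is not a command. Routing it explicitly through cmd.exe may help:

        plink.exe -ssh -pw mypwd gchhabra@machine "cmd /c dir"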

