Search Results

Search found 8744 results on 350 pages for 'yann core'.

Page 287/350 | < Previous Page | 283 284 285 286 287 288 289 290 291 292 293 294  | Next Page >

  • Perl Script in background

    - by himz
    I have a Perl script I made to automatically telnet into different servers, but its interface is command line only. To make it more user friendly for general Windows users, I need to build a GUI for it. My idea is to write the GUI in a language like VB or Java and have it call the Perl script: the script runs in the background in a command prompt, and whatever it outputs is displayed back in the GUI.

    I've had some success. The GUI is in VB, and I run an instance of CMD in the background with the Perl script inside it. But that is where the program fails: because the Perl script runs in its own thread, I only get its output when the script completes (or rather, when it times out). I need a mechanism where I can interact with the Perl script, take the script's output and show it to the user, then take input from the user, and so on. Can you suggest some way to actually make this happen?

    PS: There is no limitation on the language used for the GUI (the core work is done by the Perl script; the GUI is only there to give the appropriate commands to the script). Thanks in advance.
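
    The symptom described here (output only arriving when the script exits) is usually a buffering problem rather than a GUI problem: the child has to flush each line (in Perl, set $| = 1), and the parent has to read the pipe line by line instead of waiting for the process to finish. Below is a minimal sketch of the parent side, written in Python rather than the poster's VB setup, with a placeholder script name; it is only meant to show the streaming pattern.

        import subprocess

        def run_and_stream(command, on_line):
            """Start `command` and hand each output line to `on_line` as it arrives."""
            proc = subprocess.Popen(
                command,
                stdin=subprocess.PIPE,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,
                bufsize=1,                 # line-buffered on our side
                universal_newlines=True,
            )
            for line in proc.stdout:       # blocks until the child prints (and flushes) a line
                on_line(line.rstrip())
            return proc.wait()

        if __name__ == "__main__":
            # "myscript.pl" is a placeholder; the child must flush after each print
            # (Perl: $| = 1), or the output will still arrive only at the end.
            run_and_stream(["perl", "myscript.pl"], on_line=print)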

    Read the article

  • Trying to build the basic python extension example fails (windows)

    - by Alexandros
    Hello, I have Python 2.6 and Visual Studio 2008 running on a Win7 x64 machine. When I try to build the basic Python C extension example ("example_nt", as found in the Python 2.6 source distribution), it fails:

        python setup.py build

    This results in:

        running build
        running build_ext
        building 'aspell' extension
        Traceback (most recent call last):
          File "setup.py", line 7, in <module>
            ext_modules = [module1])
          File "C:\Python26\lib\distutils\core.py", line 152, in setup
            dist.run_commands()
          File "C:\Python26\lib\distutils\dist.py", line 975, in run_commands
            self.run_command(cmd)
          File "C:\Python26\lib\distutils\dist.py", line 995, in run_command
            cmd_obj.run()
          File "C:\Python26\lib\distutils\command\build.py", line 134, in run
            self.run_command(cmd_name)
          File "C:\Python26\lib\distutils\cmd.py", line 333, in run_command
            self.distribution.run_command(command)
          File "C:\Python26\lib\distutils\dist.py", line 995, in run_command
            cmd_obj.run()
          File "C:\Python26\lib\distutils\command\build_ext.py", line 343, in run
            self.build_extensions()
          File "C:\Python26\lib\distutils\command\build_ext.py", line 469, in build_extensions
            self.build_extension(ext)
          File "C:\Python26\lib\distutils\command\build_ext.py", line 534, in build_extension
            depends=ext.depends)
          File "C:\Python26\lib\distutils\msvc9compiler.py", line 448, in compile
            self.initialize()
          File "C:\Python26\lib\distutils\msvc9compiler.py", line 358, in initialize
            vc_env = query_vcvarsall(VERSION, plat_spec)
          File "C:\Python26\lib\distutils\msvc9compiler.py", line 274, in query_vcvarsall
            raise ValueError(str(list(result.keys())))
        ValueError: [u'path']

    What can I do to fix this? Any help will be appreciated.
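
    For reference, the exception is raised inside distutils itself: query_vcvarsall only found a 'path' entry after running vcvarsall.bat, which suggests the Visual Studio environment for the target platform was not set up, rather than anything being wrong in the setup script. The distutils boilerplate for a single C extension is small; a minimal sketch (module and file names are illustrative, not the ones from the question) looks roughly like this:

        # Minimal distutils setup script for one C extension (names are illustrative).
        from distutils.core import setup, Extension

        module1 = Extension('example',             # import name of the compiled module
                            sources=['example.c'])

        setup(name='example',
              version='1.0',
              description='Basic C extension example',
              ext_modules=[module1])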

    Read the article

  • How to install DBD::mysql on OS X Server 10.6?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (Mac mini server), but I'm apparently missing the MySQL headers. Since MySQL is already part of OS X Server 10.6, I would like NOT to install anything else (no Fink or DarwinPorts installs), just whatever is needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere, and if so, where? (Again: I don't want to install another version of MySQL on the box; I want to use the version it came with.) Is there a way to install DBD::mysql without compiling any C files?

    This is the error I get (the actual output is much longer, but these are the most meaningful bits; this is the first error reported):

        Checking if your kit is complete...
        Looks good
        Unrecognized argument in LIBS ignored: '-pipe'
        Note (probably harmless): No library found for -lmysqlclient
        Multiple copies of Driver.xst found in:
          /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
          /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/
        at Makefile.PL line 907
        Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/
        Writing Makefile for DBD::mysql
        cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm
        cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm
        cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod
        cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm
        gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c
        In file included from dbdimp.c:20:
        dbdimp.h:22:49: error: mysql.h: No such file or directory
        dbdimp.h:23:45: error: mysqld_error.h: No such file or directory
        dbdimp.h:25:49: error: errmsg.h: No such file or directory

    Read the article

  • Best architecture for a social media app

    - by Sky
    Hey guys, I'm working on a promising project developing a new social media app for web and mobile. We are at the beginning, defining functionality. Nevertheless, I'm thinking ahead about architecture, so I'm asking:

    1 - What's the best platform to develop the core of this application, which will expose a REST API?
    2 - What's the best database that will scale and grow with my application?

    As far as I've researched, these were the answers I found most interesting. For the database: Cassandra (NoSQL), with amazing scalability, amazing write performance and good read performance (to be improved in 0.6); I think I will choose that one. Plus ZooKeeper for transactions on Cassandra. I think those two technologies are really good for that purpose. What do you think, guys?

    For the front end that will serve the REST API, I don't have a final candidate. Here my questions are based on performance vs. scalability vs. fast development/maintenance. Java or .NET, as far as I've researched, bring the best balance of these requirements. Python, Perl and Rails have the best development/maintenance speed but fall short on everything else. C or C++ I don't even consider, because their development/maintenance speed is poor. So what do you think about it?

    Read the article

  • The project is not configured for Facelets yet in RAD 8.0

    - by Jyoti
    I am trying to create JSF 2 pages. When I create pages using the Facelets template, I get a message at the top saying "The project is not configured for Facelets yet. You need to add a Facelets runtime to the project's classpath". I created a file called Test1.xhtml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:ui="http://java.sun.com/jsf/facelets"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:h="http://java.sun.com/jsf/html">
        <h:head>
            <title>Test1</title>
            <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
            <meta name="GENERATOR" content="Rational® Application Developer for WebSphere® Software" />
        </h:head>
        <h:body>
            Test
        </h:body>
        </html>

    When I run this, I see the raw content of the file in the browser instead of just "Test". Also, no page code is generated for it.

    Read the article

  • JQuery Autocomplete plugin not working with JQuery 1.4.1

    - by Russ Clark
    I've been using the jQuery Autocomplete plugin with jQuery version 1.3.2, and it has been working great. I recently updated jQuery in my project to version 1.4.2, and the Autocomplete plugin is now broken: the jQuery code that adds items to a textbox on my web page doesn't seem to be getting called at all. Does anyone know if the jQuery Autocomplete plugin is incompatible with jQuery version 1.4.2, and if there is a fix for this problem? Here is some sample code I've built in an ASP.NET web site (which works fine if I change the jQuery file to jquery-1.3.2.js, but nothing happens using jquery-1.4.2.js):

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
            <script type="text/javascript" src="js/jquery-1.4.2.js"></script>
            <script type="text/javascript" src="js/jquery.autocomplete.js"></script>
            <script type="text/javascript">
                $(document).ready(function() {
                    var data = "Core Selectors Attributes Traversing Manipulation CSS Events Effects Ajax Utilities".split(" ");
                    $(':input:text:id$=sapleUser').autocomplete(data);
                });
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
                API Reference:
                <input id="sapleUser" autocomplete="off" type="text" runat="server" /> (try "C" or "E")
            </form>
        </body>
        </html>

    Read the article

  • Internal bug tracking tickets - Redmine, Trac, or JIRA

    - by Tai Squared
    I've been looking at setting up Redmine, Trac, or JIRA to track issues. I want my development team to be able to create internal tickets that are never seen by clients, while clients can create/edit tickets that are seen by the internal team.

    From the Trac documentation, you can set permissions to create or view tickets, but it doesn't seem to allow viewing only certain tickets. It may be possible with Trac Fine Grained Permissions, but it doesn't appear so. The Redmine documentation mentions "Define your own roles and set their permissions in a click", but it doesn't appear to have that level of granularity. From the JIRA documentation: "At the moment JIRA is only able to support security at a project level or issue level. Currently there is no field level security available."

    According to this question, Redmine doesn't support internal tickets, so you would have to use multiple projects. I don't want a situation where I would have to create multiple projects (one internal, one external) and have the external tickets brought into the internal repository; it seems as if this would lead to unnecessary overhead and, inevitably, the projects wouldn't stay in sync.

    Is there any way with any of these products (possibly through a plug-in, if not in the core product itself) to specify these permissions, or to simplify having two projects with different users and permissions that must still share information?

    Read the article

  • strange redefined symbols

    - by Chris H
    I included this header into one of my own: http://codepad.org/lgJ6KM6b. When I compiled, I started getting errors like this:

        CMakeFiles/bin.dir/SoundProjection.cc.o: In function `Gnuplot::reset_plot()':
        /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/new:105: multiple definition of `Gnuplot::reset_plot()'
        CMakeFiles/bin.dir/main.cc.o:project/gnuplot-cpp/gnuplot_i.hpp:962: first defined here
        CMakeFiles/bin.dir/SoundProjection.cc.o: In function `Gnuplot::set_smooth(std::basic_string, std::allocator const&)':
        project/gnuplot-cpp/gnuplot_i.hpp:1041: multiple definition of `Gnuplot::set_smooth(std::basic_string, std::allocator const&)'
        CMakeFiles/bin.dir/main.cc.o:project/gnuplot-cpp/gnuplot_i.hpp:1041: first defined here
        CMakeFiles/bin.dir/SoundProjection.cc.o:/usr/include/eigen2/Eigen/src/Core/arch/SSE/PacketMath.h:41: multiple definition of `Gnuplot::m_sGNUPlotFileName'

    I know it's hard to see in this mess, but look at where the redefinitions are taking place. They take place in files like /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/new:105. How is the new operator getting information about a gnuplot header? I can't even edit that file. How could that ever be possible? I'm not even sure how to start debugging this.

    I hope I've provided enough information; I wasn't able to reproduce this in a small project. I'm mostly just looking for tips on how to find out why this is happening and how to track it down. Thanks.

    Read the article

  • Layering Cocoa WebView - Drawing on top?

    - by Josh
    http://stackoverflow.com/questions/1618498/webview-in-core-animation-layer

    The thread above is the only other one I can find, and it doesn't necessarily fit my needs. Is there a reliable way to simply draw a view on top of a WebView? I've tried to layer a regular NSView on top of a WebView, and it draws correctly at first, but any movement in the WebView (scrolling the page, etc.) appears to invalidate the view and produces visual artifacts. I've tried:

        [[[NSApp mainWindow] contentView] addSubview:view positioned:NSWindowAbove relativeTo:webView];

    No luck there, same problems: z-ordering doesn't seem to work unless I'm missing something. Is this just a limitation of WebViews?

    I also tried implementing the view above as a window, which worked much better (I just controlled the location of the window programmatically). However, the desired behavior is for the user to enter some text into this window without it stealing "focus", i.e. the main window goes inactive (the x - + buttons go gray) when the user clicks on the text field in the new window. Is there any way to avoid that? I've tried subclassing NSWindow and overriding canBecomeKey (return YES) and canBecomeMain (return NO), but the window still steals focus.

    Josh

    Read the article

  • How do I build latest Tycho

    - by hedefalk
    I've tried to build Tycho for a couple of hours now and just can't get it to work. I've followed these instructions: https://docs.sonatype.org/display/TYCHO/BuildingTycho

    I've downloaded Eclipse 3.6RC2 and the delta packs linked from this post (is it for 3.5 only?): http://aniefer.blogspot.com/2009/06/using-deltapack-in-eclipse-35.html

    I've added the DeltaPack to the target platform inside the Eclipse installation, and I've installed Maven: Apache Maven 3.0-beta-1 (r935667; 2010-04-19 19:00:39+0200).

    I can run the first bootstrap of the build, but the second fails:

        mvn clean install -e -V -Pbootstrap-2 -Dtycho.targetPlatform=$TYCHO_TARGET_PLATFORM

        [ERROR] Internal error: java.lang.RuntimeException: Could not resolve plugin org.eclipse.core.net.linux.x86_null -> [Help 1]

    I've tried different things. I built an older revision against 3.5 as in this blog post: http://divby0.blogspot.com/2010/03/im-in-love-with-tycho-08-and-maven-3.html. That actually built a running Maven, but that version then can't find the Tycho plugin:

        org.apache.maven.plugin.version.PluginVersionResolutionException: Error resolving version for plugin 'org.codehaus.tycho:maven-tycho-plugin' from the repositories [local (/Users/viktor/.m2/repository), central (http://repo1.maven.org/maven2)]: Plugin not found in any plugin repository

    I thought the point was that the plugin would be built in when I had built a Tycho distribution? (Sorry about the original links; Stack Overflow's spam protection didn't let me post more than one URL yet.)

    Read the article

  • How to use <c:out value=...> taglib

    - by artaxerxe
    I have the class A:

        package a;

        public class A {
            private int x = 9;

            public int getX() {
                return x;
            }
        }

    and the ajsp.jsp file:

        <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
        <%@ taglib prefix="c" uri="http://java.sun.com/jstl/core" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        </head>
        <body>
            <jsp:useBean id="a" class="a.A" />
            <c:out value="${a.x}" />
        </body>
        </html>

    When I run it, it gives an error:

        org.apache.jasper.JasperException: /ajsp.jsp(11,0) PWC6236: According to TLD or attribute directive in tag file, attribute value does not accept any expressions

    If instead of <c:out value="${a.x}" /> I use <jsp:getProperty property="x" name="a"/>, it works perfectly. So, what is the problem? Thanks in advance.

    Read the article

  • What data structure to use / data persistence

    - by Dave
    I have an app where I need one table of information with the following fields:

        field 1 - int or char
        field 2 - string (max 10 char)
        field 3 - string (max 20 char)
        field 4 - float

    I need the program to filter on field 1 based upon a segmented control, and to select a field 2 value from a picker. From this data I need to look up field 4 to use in a calculation. Total records will be about 200; I never see it going above 400-500. I am going to use a singleton, which I am able to do; I just need help with the structure for this and with data persistence.

    What type of data structure should I use for this, and should I use NSNumber, NSString, etc., or plain data types like float, char, etc.? I thought about a struct put into an array, but there is probably a better way. This is new to me, so any help or references to examples would be great. I also thought about a plist or a dictionary, but those look like a single key-to-value lookup, which obviously won't work. Core Data looked like overkill to me.

    Also, with any recommendation, how should I get the initial data into it? I want the user to be able to edit and add to the database. Sorry for the old terms; you can see what generation I am from... Thanks in advance!

    Read the article

  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school we learned about virtual functions in C++, and how they are resolved (or found, or matched; I don't know what the correct terminology is, as we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>

        using namespace std;

        class A {
        public:
            void f() {
                // do nothing
            }
        };

        class B: public A {
        public:
            void f() {
                // do nothing
            }
        };

        int main() {
            unsigned int i;
            A *a = new B;

            for (i = 0; i < UINT_MAX; i++)
                a->f();

            return 0;
        }

    where I made A::f() once normal, once virtual. Here are my results:

        [felix@the-machine C]$ time ./normal
        real    0m25.834s
        user    0m25.742s
        sys     0m0.000s
        [felix@the-machine C]$ time ./virtual
        real    0m24.630s
        user    0m24.472s
        sys     0m0.003s
        [felix@the-machine C]$ time ./normal
        real    0m25.860s
        user    0m25.735s
        sys     0m0.007s
        [felix@the-machine C]$ time ./virtual
        real    0m24.514s
        user    0m24.475s
        sys     0m0.000s
        [felix@the-machine C]$ time ./normal
        real    0m26.022s
        user    0m25.795s
        sys     0m0.013s
        [felix@the-machine C]$ time ./virtual
        real    0m24.503s
        user    0m24.468s
        sys     0m0.000s

    There seems to be a steady ~1 second difference in favor of the virtual version. Why is this?

    Possibly relevant (or not): dual-core Pentium @ 2.80GHz, no extra applications running between the two tests, Arch Linux with gcc 4.5.0, compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings either.

    Read the article

  • Flex - Increase timeout on a PHP service function call

    - by Travesty3
    I'm using Flash Builder 4 Beta 2 and have it connecting to a PHP service. I set this up using the wizard, so I didn't actually write the code that connects to it. The generated service looks like this:

        package services.flash {
            import mx.rpc.AsyncToken;
            import com.adobe.fiber.core.model_internal;
            import mx.rpc.AbstractOperation;
            import valueObjects.CustomDatatype8;
            import valueObjects.NewUsageData;
            import mx.collections.ItemResponder;
            import mx.rpc.remoting.RemoteObject;
            import mx.rpc.remoting.Operation;
            import com.adobe.fiber.services.wrapper.RemoteObjectServiceWrapper;
            import com.adobe.fiber.valueobjects.AvailablePropertyIterator;
            import com.adobe.serializers.utility.TypeUtility;

            [ExcludeClass]
            internal class _Super_FLASH extends RemoteObjectServiceWrapper {

                // Constructor
                public function _Super_FLASH() {
                    // initialize service control
                    _serviceControl = new RemoteObject();

                    var operations:Object = new Object();
                    var operation:Operation;

                    operation = new Operation(null, "sendCommand");
                    operation.resultType = Object;
                    operations["sendCommand"] = operation;
                    ...
                }
            }
        }

    One of the functions that I'm calling fetches users from a MySQL database. There are about 30,000 users right now. The service seems to time out when fetching more than around 22,000 rows; I get the "Channel Disconnected before an acknowledgement was received" error. If I call the PHP script from a browser, however, it fetches them all with no problems at all. I have tried increasing the timeout in the PHP script (which didn't work), but obviously this isn't the problem, since the browser is able to pull them all up without issue.

    Is there a way to increase the timeout of the PHP service in Flash Builder? I'm a bit of a noob when it comes to Flash, so please be descriptive. Thanks in advance!

    Read the article

  • Copy whole SQL Server database into JSON from Python

    - by Oli
    I'm facing an atypical conversion problem. About a decade ago I coded up a large site in ASP. Over the years this turned into ASP.NET, but it kept the same database. I've just re-done the site in Django and I've copied all the core data, but before I cancel my account with the host, I need to make sure I have a long-term backup of the data, so that if it turns out I'm missing something I can copy it from a local copy.

    To complicate matters, I no longer have Windows; I moved to Ubuntu on all my machines some time back. I could ask the host to send me a backup, but having no access to a machine with MSSQL, I wouldn't be able to use it if I needed to. So I'm looking for something that does:

        db = {}
        for table in database:
            db[table.name] = [row for row in table]

    And then I could serialize db off somewhere for later consumption... But how do I do the table iteration? Is there an easier way to do all of this? Can MSSQL do a cross-platform SQL dump (including data)?

    For previous MSSQL work I've used pymssql, but I don't know how to iterate the tables and copy rows (ideally with column headers, so I can tell what the data is). I'm not looking for much code, but I need a poke in the right direction.
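
    A minimal sketch of that direction, assuming pymssql is available and every table can simply be read in full (connection details and the output path are placeholders): list the base tables through INFORMATION_SCHEMA, take column names from cursor.description, and serialize with a string fallback for dates and decimals.

        # Hedged sketch: dump every base table of an MSSQL database to one JSON file.
        import json
        import pymssql

        conn = pymssql.connect(server='example-host', user='someuser',
                               password='secret', database='legacy_db')
        cursor = conn.cursor()

        # Find the user tables (skips views).
        cursor.execute(
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
            "WHERE TABLE_TYPE = 'BASE TABLE'")
        tables = [row[0] for row in cursor.fetchall()]

        db = {}
        for table in tables:
            cursor.execute("SELECT * FROM [%s]" % table)   # assumes plain table names
            columns = [col[0] for col in cursor.description]
            db[table] = [dict(zip(columns, row)) for row in cursor.fetchall()]

        with open('backup.json', 'w') as out:
            # default=str covers datetime/Decimal values json can't encode natively
            json.dump(db, out, default=str, indent=2)

        conn.close()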

    Read the article

  • django inner redirects

    - by Zayatzz
    Hello. I have a project that causes no problems on my own development computer (which serves it with mod_wsgi), but on the live server (which uses mod_fastcgi) it generates a 500.

    My URL conf is like this:

        # -*- coding: utf-8 -*-
        from django.conf.urls.defaults import *

        # Uncomment the next two lines to enable the admin:
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('',
            url(r'^admin/', include(admin.site.urls)),
            url(r'^', include('jalka.game.urls')),
        )

    and

        # -*- coding: utf-8 -*-
        from django.conf.urls.defaults import *
        from django.contrib.auth import views as auth_views

        urlpatterns = patterns('jalka.game.views',
            url(r'^$', view = 'front', name = 'front',),
            url(r'^ennusta/(?P<game_id>\d+)/$', view = 'ennusta', name = 'ennusta',),
            url(r'^login/$', auth_views.login, {'template_name': 'game/login.html'}, name='auth_login'),
            url(r'^logout/$', auth_views.logout, {'template_name': 'game/logout.html'}, name='auth_logout'),
            url(r'^arvuta/$', view = 'arvuta', name = 'arvuta',),
        )

    and .htaccess is like this:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteOptions MaxRedirects=10
        # RewriteCond %{HTTP_HOST} .
        RewriteCond %{HTTP_HOST} ^www\.domain\.com
        RewriteRule (.*) http://domain.com/$1 [R=301,L]

        AddHandler fastcgi-script .fcgi

        RewriteCond %{HTTP_HOST} ^jalka\.domain\.com$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*) cgi-bin/fifa2010.fcgi/$1 [QSA,L]

        RewriteCond %{HTTP_HOST} ^subdomain\.otherdomain\.eu$ [NC]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*) cgi-bin/django.fcgi/$1 [QSA,L]

    Note that I also have another project set up with the same kind of .htaccess, and that one is running just fine with more complex URLs and views.

    fifa2010.fcgi:

        #!/usr/local/bin/python
        # -*- coding: utf-8 -*-
        import sys, os

        DOMAIN = "domain.com"
        APPNAME = "jalka"
        PREFIX = "/www/apache/domains/www.%s" % (DOMAIN,)

        # Add a custom Python path.
        sys.path.insert(0, os.path.join(PREFIX, "htdocs/django/Django-1.2.1"))
        sys.path.insert(0, os.path.join(PREFIX, "htdocs"))
        sys.path.insert(0, os.path.join(PREFIX, "htdocs/jalka"))

        # Switch to the directory of your project. (Optional.)
        os.chdir(os.path.join(PREFIX, "htdocs", APPNAME))

        # Set the DJANGO_SETTINGS_MODULE environment variable.
        os.environ['DJANGO_SETTINGS_MODULE'] = "%s.settings" % (APPNAME,)

        from django.core.servers.fastcgi import runfastcgi
        runfastcgi(method="threaded", daemonize="false")

    Alan

    Read the article

  • Multithreading in Java

    - by Vijay Selvaraj
    I'm working with core Java and IBM WebSphere MQ 6.0. We have a standalone module, say DBComponent, that hits the database and fetches a result set based on a runtime query. The query is passed to the application via MQ messaging. We have a trigger configured for the queue which invokes the DBComponent whenever a message is available on the queue. The DBComponent consumes the message, constructs the query and returns the result set to another queue. Throughout this process we use log4j to log statements to a log file for auditing. The database connection is pooled using Apache pool.

    I am trying to check whether the log messages are logged correctly, using a sample program. The program places the input message on the queue and then checks for the entries in the log file. The trigger-driven processing is expected to complete before I check for the message in the log file, but every time, my checking code executes first, so the check fails. Even introducing a Thread.sleep(time) doesn't solve the problem. How can I make my method wait until the trigger operation completes? Any suggestion will be helpful.
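
    One language-agnostic way to attack this kind of race in a test harness is to poll for the expected log line with a deadline instead of a single fixed sleep. A minimal sketch of that idea, written in Python only for brevity (the log path, expected text and timeout are placeholders):

        import time

        def wait_for_log_line(path, expected, timeout=30.0, interval=0.5):
            """Return True once `expected` appears in the file at `path`,
            or False if `timeout` seconds pass first."""
            deadline = time.time() + timeout
            while time.time() < deadline:
                try:
                    with open(path) as log:
                        if any(expected in line for line in log):
                            return True
                except IOError:
                    pass                     # log file not created yet; keep waiting
                time.sleep(interval)         # poll instead of one blind sleep
            return False

        # e.g. wait_for_log_line("audit.log", "query completed", timeout=60)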

    Read the article

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.)

    Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, and I don't see any reason for it to do so. Does anyone have any ideas on how to fix that, or a better way to handle this altogether? Should I be exporting that first query to a fixture and then re-importing it?

        from django.core.management.base import LabelCommand
        from django.db.utils import IntegrityError
        from django.db import models
        from django.conf import settings

        LIVE_DATABASE_KEY = 'live'

        class Command(LabelCommand):
            help = ("Synchronizes the data between the local machine and the live server")
            args = "APP_NAME"
            label = 'application name'
            requires_model_validation = False
            can_import_settings = True

            def handle_label(self, label, **options):
                # Make sure we're running the command on a developer machine and that we've got the right settings
                db_settings = getattr(settings, 'DATABASES', {})
                if not LIVE_DATABASE_KEY in db_settings:
                    print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                    return
                if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                    print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                    return

                # Fetch all models for the given app
                try:
                    app = models.get_app(label)
                    app_models = models.get_models(app)
                except:
                    print "The app '%s' could not be found or models could not be loaded for it." % label

                for model in app_models:
                    print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)

                    # Query each model from the live site
                    qs = model.objects.all().using(LIVE_DATABASE_KEY)

                    # ...and save it to the local database
                    for record in qs:
                        try:
                            record.save(using='default')
                        except IntegrityError:
                            # Skip as the record probably already exists
                            pass
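
    On the ManyToMany point: record.save() only writes the model's own columns, so the join tables are never copied and each m2m field has to be handled explicitly. A hedged sketch of how the inner loop could be extended (written against the 1.2-era multi-db API and assuming the related objects have already been synced to the local database):

        for record in qs:
            # Capture the m2m relations while the instance is still bound to 'live'.
            m2m_data = {}
            for field in model._meta.many_to_many:
                m2m_data[field.name] = list(
                    getattr(record, field.name).all().values_list('pk', flat=True))

            try:
                record.save(using='default')   # writes the plain columns only
            except IntegrityError:
                pass

            # Re-create the join-table rows on the local database.
            for name, pks in m2m_data.items():
                manager = getattr(record, name)            # now routed to 'default'
                related = manager.model.objects.using('default').filter(pk__in=pks)
                manager.add(*related)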

    Read the article

  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance?

    I've been one of the maintainers of the perlfaq (on GitHub), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source control repository; I recently converted that to git. Periodically, one of the perl5-porters would sync the shared files between the perlfaq repo and the perl repo. Since we've switched to git, we've been a bit lazy about converting the tools, and I'm now the one who does that. For the time being, the two repos are going to stay separate.

    Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files in the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff, you idiot!). The files do not have the same paths in the two repos, but that's something I could change, I think.

    What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice versa, so the history and commit ids correspond in each.

    - Creating patches works, but it's also a lot of work to manage.
    - Git submodules seem to only work for pulling in the entire external repo.
    - I haven't found something like svn's file externals, but that would work in both directions anyway.
    - I'd love to just fetch objects from one and cherry-pick them in the other.

    What's a good way to manage this?

    Read the article

  • Is it possible to implement an infinite IEnumerable without using yield with only C# code?

    - by sinelaw
    Edit: Apparently off topic... moving to Programmers.StackExchange.com.

    This isn't a practical problem, it's more of a riddle.

    Problem: I'm curious to know if there's a way to implement something equivalent to the following, but without using yield:

        IEnumerable<T> Infinite<T>() {
            while (true) {
                yield return default(T);
            }
        }

    Rules:
    - You can't use the yield keyword.
    - Use only C# itself directly: no IL code, no constructing dynamic assemblies, etc.
    - You can only use the basic .NET lib (only mscorlib.dll, System.Core.dll? Not sure what else to include). However, if you find a solution with some of the other .NET assemblies (WPF?!), I'm also interested.
    - Don't implement IEnumerable or IEnumerator.

    Notes: The closest I've come yet:

        IEnumerable<int> infinite = null;
        infinite = new int[1].SelectMany(x => new int[1].Concat(infinite));

    This is "correct" but hits a StackOverflowException after 14399 iterations through the enumerable (not quite infinite). I'm thinking there might be no way to do this due to the CLR's lack of tail recursion optimization. A proof would be nice :)

    Read the article

  • Spring configuration of C3P0 with Hibernate?

    - by HDave
    I have a Spring/JPA application with Hibernate as the JPA provider. I've configured a C3P0 data source in Spring via:

        <bean id="myJdbcDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
            <!-- Connection properties -->
            <property name="driverClass" value="$DS{database.class}" />
            <property name="jdbcUrl" value="$DS{database.url}" />
            <property name="user" value="$DS{database.username}" />
            <property name="password" value="$DS{database.password}" />

            <!-- Pool properties -->
            <property name="minPoolSize" value="5" />
            <property name="maxPoolSize" value="20" />
            <property name="maxStatements" value="50" />
            <property name="idleConnectionTestPeriod" value="3000" />
            <property name="loginTimeout" value="300" />
        </bean>

    I then specified this data source in the Spring entity manager factory as follows:

        <bean id="myLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
            <property name="persistenceUnitName" value="myapp-core" />
            <property name="dataSource" ref="myJdbcDataSource" />
        </bean>

    However, I recently noticed a "hibernate-c3p0" artifact while browsing Maven artifacts. What is this? Is this something I need to use? Or do I already have this configured properly?

    Read the article

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file containing several thousand requests (between 10,000 and 20,000, I don't know exactly). What I have to do is load the log text files into memory so that I can query them, and this is taking the most resources. The methods that take the most CPU time are these (worst culprits first):

        string.Split          - splits the line values into an array of values
        string.Contains       - checking if the user agent contains a specific agent string (to determine browser ID)
        string.ToLower        - various purposes
        StreamReader.ReadLine - to read the log file line by line
        string.StartsWith     - determine if the line is a column definition line or a line with values

    There were some others that I was able to replace. For example, the dictionary getter was also taking a lot of resources, which I had not expected, since it's a dictionary and should have its keys indexed; I replaced it with a multidimensional array and saved some CPU time.

    Now I am running on a fast dual core, and the total time it takes to load the file I mentioned is about 1 second. This is really bad: imagine a site that has tens of thousands of visits a day; it's going to take minutes to load the log file. So what are my alternatives, if any? Because I think this is just a .NET limitation and I can't do much about it.

    Read the article

  • Accuracy of OpenGL ES Instrument

    - by Rob Jones
    I'm developing a game for the iPhone. I've decided that 30 FPS is plenty, so I've written some code that only allows the app to present the render buffer every 1/30 of a second. When I tried to verify this with Instruments, I got varying information. On an iPod Touch (2009 edition, 32 GB) it reports 30 FPS for Core Animation Frames Per Second. On an iPhone 3G I get wildly varying results, and not just below 30 FPS: I do see 30 FPS on a regular basis, but it actually seems to hang closer to 36-39.

    To investigate this anomaly I added my own FPS counter to the app, updated once per second. It stays right at 29 FPS on both devices. So, does anyone have any suggestions as to what might be going on? I expect Instruments to be accurate, so it really concerns me that it appears inaccurate. It makes me think I have a bug somewhere, but I sure can't find it.

    Read the article

  • Why is VB6 still so widely used?

    - by Uri
    Note that this question is not meant to start an argument; I am genuinely curious.

    Back in the 90s I used to work for a large CPU maker, and we were building some debuggers. All our core logic was in C++, but the GUI was made in VB6. We couldn't figure out MFC and it was too much of a hassle, so we glued VB6 and C++ together using COM, which we created via ATL.

    Fast forward to 2009: having been mostly in the Java world, I am looking at job boards and seeing more VB6 jobs than I expected. I genuinely thought that VB6 was extinct, especially since VB.NET supposedly solved a lot of the problems with VB6, which at the time felt a lot more like a scripting language than a true OOP language.

    So what happened? Why is new code still written in it? Isn't there a better way to glue C++ and VB.NET? Note that I haven't used VB.NET; I just understand that it is a much "stricter" language than VB6 was, even with Option Explicit or whatever it was.

    Read the article

  • Can I ignore a SIGFPE resulting from division by zero?

    - by Mikeage
    I have a program which deliberately performs a divide by zero (and stores the result in a volatile variable) in order to halt in certain circumstances. However, I'd like to be able to disable this halting without changing the macro that performs the division by zero. Is there any way to ignore it? I've tried using:

        #include <signal.h>
        ...
        int main(void) {
            signal(SIGFPE, SIG_IGN);
            ...
        }

    but it still dies with the message "Floating point exception (core dumped)". I don't actually use the value, so I don't really care what's assigned to the variable: 0, random, undefined...

    EDIT: I know this is not the most portable, but it's intended for an embedded device which runs on many different OSes. The default halt action is to divide by zero; other platforms require different tricks to force a watchdog-induced reboot (such as an infinite loop with interrupts disabled). For a PC (Linux) test environment, I wanted to disable the halt on division by zero without relying on things like assert.

    Read the article
