Search Results

Search found 4561 results on 183 pages for 'production'.

Page 121/183

  • What is a good reporting service for a simple database/hobbyist setup?

    - by Zombies
    I have a meager production environment running on my PC for a little application that I work on in my spare time. At this point I have the basics set up: MySQL, JUnit, SVN... I am midway through development and now need to generate various reports, based on the data in the database. My question is this: is there an existing reporting tool that accepts SQL and generates various reports (via email, PDF, etc.)? Some tool that makes writing new reports easy while also having a reasonably robust set of features. Does this software exist, or must I write all of these reports myself?

    Read the article

  • Cookie Value not available, why?

    - by Camran
    I have tested this on my development computer, but now that I have uploaded everything to the production server I can't read out the value of the cookie. I think the problem lies in the serialization and unserialization.

        if (isset($_COOKIE['watched_ads'])) {
            $expir = time() + 1728000; // 20 days
            $ad_arr = unserialize($_COOKIE['watched_ads']); // HERE IS THE PROBLEM
            $arr_elem = count($ad_arr);
            if (in_array($ad_id, $ad_arr) == FALSE) {
                if ($arr_elem > 10) {
                    array_shift($ad_arr);
                }
                $ad_arr[] = $ad_id;
                setcookie('watched_ads', serialize($ad_arr), $expir, '/');
            }
        }

    When I echo count($ad_arr) I get the expected number, 1 in this case, so there seems to be a value there. But when I echo $ad_arr[0] I get nothing. Completely blank. No text at all. Anybody have a clue? If you need more info about something, let me know...
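
    A likely culprit, though not confirmed in the post, is that the production server escapes incoming cookie data (e.g. magic_quotes_gpc on an older shared host), so unserialize() fails and returns FALSE; that would also explain why count($ad_arr) echoes 1 (count(FALSE) is 1 in PHP 5) while $ad_arr[0] prints nothing. A minimal sketch of a more defensive read, assuming the cookie was written with serialize() as above:

        <?php
        // Hypothetical helper: read the watched-ads cookie defensively.
        function read_watched_ads() {
            if (!isset($_COOKIE['watched_ads'])) {
                return array();
            }
            $raw = $_COOKIE['watched_ads'];
            // If magic_quotes_gpc is on, the quotes inside the serialized
            // string arrive escaped and unserialize() chokes on them.
            if (get_magic_quotes_gpc()) {
                $raw = stripslashes($raw);
            }
            $arr = unserialize($raw);
            // unserialize() returns FALSE on failure; fall back to an empty list.
            return is_array($arr) ? $arr : array();
        }

    Storing the list with json_encode()/json_decode() instead of serialize()/unserialize() would sidestep the quoting problem as well.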

    Read the article

  • Working effectively with unit tests / Anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing to my team, as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj where all the test code is dumped, organised into folders that mirror the namespaces. This approach is not very scalable and makes it difficult to find tests. I've been thinking about adding a Tests folder to each project; this would put the unit tests "in the developer's face" and make them easy to find. The downside is that the production release code would contain references to NUnit and NMock, as well as the test code and test data... Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per "real" project/assembly would introduce too many new projects. Thanks in advance

    Read the article

  • MySQL on Windows-7 (64-bit) on 0.0.0.0:3306 rather than 127.0.0.1:3306

    - by Mark Baker
    I've just installed the latest production release of MySQL (64-bit) on my Windows 7 box. It was a straight vanilla install, using all defaults, but phpMyAdmin can't see it at all. MySQL is configured as a service to start automatically, and I know it's running because the MySQL GUI tools work correctly. Doing a netstat -a, I see

        TCP    0.0.0.0:3306      Marks-Netbook:0    LISTENING

    when I'd expect to see

        TCP    127.0.0.1:3306    Marks-Netbook:0    LISTENING

    I don't know if this is the reason phpMyAdmin can't connect, but suspect that it probably is. Can anybody confirm whether this is the likely cause, and/or suggest how I can resolve it?

    Read the article

  • Weblogic 10.3 domain unpacking problem

    - by MarkoU
    Hi, I'm trying to unpack a WebLogic 10.3 domain on one of our production servers (SunOS 5.10), but get the following error:

        $ /opt/bea10/wlserver_10.3/common/bin/unpack.sh -template=/tmp/CM.jar -domain=/opt/bea10/user_projects/CM
        Error: failed to create the temporary script file

    Assuming that this is a privilege problem: where does the unpack utility actually try to create its temporary script files? The unpack script calls a Java class, com.bea.plateng.domain.script.Unpacker, so reading the script itself does not reveal the location. I need to ask the sysadmin for the privileges, so an exact directory location is needed. Of course, the error message is so vague that this might also be some other issue. Any ideas? BR, Marko P.S. Sorry for cross-posting. I tried this question on Serverfault as well but got no replies. Perhaps programmers (like myself) do this kind of stuff anyway.

    Read the article

  • Know if a Visual Studio Website project is recompiling itself in the background?

    - by jdk
    A number of team members update a central ASP.NET dev site (a Website project, not a Web Application type). Some kinds of changes cause a recompile/rebuild of it. The large website takes a while to recompile, and we've noticed it will still seemingly serve out dynamic pages before everything is internally updated. During the site's "gestation" period, our mileage varies while hitting it: sometimes we get a correct page, sometimes a compilation error page that will eventually be served without the error, and at other times an unexpected hybrid. Is it possible to query an ASP.NET website application to see if it's currently compiling or rebuilding itself? If so, I would write a status page that the team could check when they're getting weird behaviour, so they would know to wait. Update: Our team often edits files manually on the dev server. For production we'd do pre-compiled pushes. The dev environment is a little more malleable and ever-changing, so I'm looking for a solution to reduce the "confusion" there.

    Read the article

  • php $_POST array empty upon form submission...

    - by Mike D
    Hi folks, I'm baffled on this after much googling. The issue is simple: I have a custom CMS I've built that works perfectly on my dev box (Ubuntu/PHP5+/MySQL5+). I just moved it up to the production box for my client, and now all form submissions show up as empty $_POST arrays. I found a trick to verify the data is actually being passed, using file_get_contents('php://input'), and the data shows up fine there -- but the $_POST/$_REQUEST arrays are always empty. I've also verified via Firebug that the content-type headers are correct (application/x-www-form-urlencoded; charset=utf-8). This happens regardless of whether a form is submitted via AJAX or a regular form submit. Any help is greatly appreciated!
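
    Since the raw body reaches PHP but $_POST stays empty, a quick diagnostic sketch like the one below can help narrow it down; it is not from the post, and the checks are assumptions about the usual suspects (a redirect that turns the POST into a GET, or a body larger than post_max_size):

        <?php
        // Hypothetical diagnostic page for the "empty $_POST" symptom.
        header('Content-Type: text/plain');

        // A redirect (trailing slash, www/non-www, http->https) drops the POST body.
        echo "REQUEST_METHOD: " . $_SERVER['REQUEST_METHOD'] . "\n";

        // A body larger than post_max_size is silently discarded.
        echo "post_max_size:  " . ini_get('post_max_size') . "\n";
        echo "CONTENT_LENGTH: " . (isset($_SERVER['CONTENT_LENGTH']) ? $_SERVER['CONTENT_LENGTH'] : 'n/a') . "\n";

        // Compare what PHP parsed against the raw body it received.
        echo "parsed fields:  " . count($_POST) . "\n";
        echo "raw body:       " . file_get_contents('php://input') . "\n";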

    Read the article

  • How to inline compressed CSS in Rails with assets pipeline

    - by haimg
    I'm trying to inline CSS into my layout. I'm currently using

        = Rails.application.assets.find_asset('embedded.css').body.html_safe

    However, the CSS returned is not compressed. I verified that the .digest_path asset file exists and is properly compressed. I can, of course, write a helper that checks whether a current on-disk compressed asset file exists for a given asset and uses it. However, I think find_asset actually compiles the CSS asset each time it is called -- not good in production. I hope a cleaner solution exists for this issue.

    Read the article

  • Should I use threeten instead of joda-time

    - by Yan Cheng CHEOK
    Hello all, I came across http://www.jroller.com/scolebourne/entry/why_jsr_310_isn_t. 1) I am currently migrating from Java Calendar to joda-time. I was wondering, should I use threeten instead of joda-time? Is threeten production ready? 2) Can the threeten and joda-time libraries exist together in the same application? I am using some third-party libraries which use joda-time too. 3) Will joda-time become an abandoned project now that there is threeten? Thanks.

    Read the article

  • Can I change the datasource of a report model at runtime?

    - by Manab De
    I've some ad hoc reports developed on report models which are published on the Report Server (we're using SSRS 2008). Everything is running well. Now in our production environment we have about forty (40) customers, each with their own database (all with the same table structures and other database objects). The challenge is that whenever a customer logs into the report server using Windows authentication and tries to view those reports, we need to get the SQL data from that customer's database only. Reports are designed using the report model, and each model has a valid data source connected to a particular database. We could create forty separate data sources, each connected to a specific database. My question is: is there any way to change the report model's data source dynamically at runtime, based on the customer, so that during execution of the report SSRS fetches the data from the correct database and not from any other? Please help me.

    Read the article

  • Where does ASP.Net get its rendered IDs from?

    - by NeilD
    Hi, I've inherited a project with some nasty JavaScript that depends on hard-coded element IDs, i.e. there are lots of places where it does things like this:

        var magazine = document.getElementById('repModuleDisplay__ctl3_chkCats_0');

    When the page renders in my UAT environment, the HTML looks like this, and everything works OK:

        <input id="repModuleDisplay__ctl3_chkCats_0" type="checkbox" name="repModuleDisplay:_ctl3:chkCats:0" ... etc

    However, when I put it on my production environment, the HTML is suddenly rendering like this:

        <input id="repModuleDisplay_ctl03_chkCats_0" type="checkbox" name="repModuleDisplay$ctl03$chkCats$0" ... etc

    The difference in IDs means the JavaScript can't find the element, and fails. In an ideal world I'd scrap the buggy JavaScript and do it again properly, but for a quick fix I'd like to know what is causing the difference in rendering between the two environments. Does anyone have any ideas? Thanks, Neil

    Read the article

  • Override l() function in Drupal

    - by Marco
    I'm currently working on a Drupal site (6.*) which, when in production mode, will be accessed through some kind of HTTP proxy, which means I will have to rewrite all the links in my custom theme if the $_SERVER['HTTP_X_FORWARDED_SERVER'] variable is set to the domain people will access the site from. The site has a lot of internal linking, mostly through Views. My thought is that the easiest way to solve this would be to hook into the url() and/or l() functions and post-process the URL before returning it if HTTP_X_FORWARDED_SERVER is set. My problem is that I can't figure out how to hook into these functions, or whether it's even possible without touching core. Has anyone had to do this? How did you solve it?
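
    For the theme's own links, one hedged approach is a wrapper around l() that rewrites the generated URL when the forwarded header is present. This is a sketch, not a core hook; the function name and the host-swapping rule are assumptions:

        <?php
        // Hypothetical theme-level wrapper around Drupal 6's l().
        // Assumes links behind the proxy need to carry the forwarded host.
        function mytheme_proxy_l($text, $path, $options = array()) {
          $options['absolute'] = TRUE;  // make l() emit a full URL including the host
          $link = l($text, $path, $options);
          if (!empty($_SERVER['HTTP_X_FORWARDED_SERVER'])) {
            // Swap the internal host for the one the visitor actually used.
            $link = str_replace($_SERVER['HTTP_HOST'],
                                $_SERVER['HTTP_X_FORWARDED_SERVER'],
                                $link);
          }
          return $link;
        }

    Links produced elsewhere (e.g. by Views) would not pass through this wrapper, so it only covers calls the theme makes itself.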

    Read the article

  • How to conditionalize GUI tests using Netbeans/Maven vs maven on command line invocation

    - by Ilane
    I'd like to have a single project pom, but have my GUI tests always run when I'm invoking JUnit in NetBeans, and have them conditional (on an environment variable?) when building on the command line (usually for a production build on a headless machine, but sometimes just for build speed). I don't mind instrumenting my JUnit tests for this, as I already have to set up my GUI test infrastructure, but how do I conditionalize my pom? NetBeans 6.5 with the Maven plugin. Any ideas how I can accomplish this? Ilane

    Read the article

  • ANTLR: token to text in rewrite rule

    - by Antonio
    I'm building an AST using ANTLR. I want to write a production that matches this string: ${identifier}. So, in my grammar file I have:

        reference
            : DOLLAR LBRACE IDENT RBRACE -> ^(NODE_VAR_REFERENCE IDENT)
            ;

    This works fine. I'm using my own adaptor to emit tree nodes. The rewrite rule used here creates two nodes for me: one for NODE_VAR_REFERENCE and one for IDENT. What I want to do is create only one node (for the NODE_VAR_REFERENCE token), and this node must have the IDENT token in its "token" field. Is this possible using a rewrite rule? Thanks.

    Read the article

  • Mercurial Branching Model for task features

    - by Stan
    My development env: Windows 7, TortoiseHg, ASP.NET 4.0/MVC3. Test branch: code on the test server. Prod branch: code on the production server. This is my current branching model. The reason for branching out every task (feature) is that some features go live more slowly. So in the graph above, task 1 finished earlier (changeset #5) and was merged into the test branch for testing. However, due to bugs or modifications of the original request, changesets #10 and #12 were made. Meanwhile task 2 finished testing (#8) and was already pushed to live (#9). My problem is that every time the task branch is modified (like #10, #12), I have to do another merge to the test branch (#11, #13), and this makes the graph very messy. Is there any way to solve this issue, or a better branching model?

    Read the article

  • Recommend a local LDAP store for development

    - by Paul Stovell
    Our project uses an LDAP repository for storing users. In production this will be Active Directory. For development, we seem to have a couple of options: install an AD LDS instance that everyone uses, or install an AD LDS instance on every developer machine. We're trying to keep the 'F5' experience as lightweight as possible, so installing things or relying on a central AD store aren't my favorite ideas. There are other LDAP servers, like OpenLDAP. I was hoping there might be an LDAP server that simply talks to an XML file. This would allow us to store the XML file in source control and have something that is fast and works. Our nightly builds would still use AD to pick up any differences, but the hope is that since we're using LDAP it should Just Work. Can you recommend an LDAP implementation that works well for zero-config, shared-nothing development?

    Read the article

  • MySQL thousands of updates, slowing down.

    - by noryb009
    I need to run a PHP loop a total of 100,000 times (about 10,000 per script run), and each iteration does about 5 MySQL UPDATEs. When I run the loop 50 times, it takes 3 seconds. When I run it 1,000 times, it takes about 1,300 seconds. As you can see, MySQL is slowing down a LOT as the UPDATEs add up. This is an example update:

        mysql_query("UPDATE table SET `row1`=`row1` +1 WHERE `UniqueValue`='5'");

    This is generated randomly from PHP, but I can store it in a variable and run it every n loops. Is there any way to either make MySQL and PHP run at a consistent speed (is PHP storing hidden variables?), or split up the script so they do? Note: I am running this for development purposes, not for production, so only one computer will be accessing the data.
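
    Two things worth ruling out, neither confirmed from the post: an un-indexed UniqueValue column (each UPDATE then scans the whole table) and per-statement commits (each UPDATE then waits for its own flush). Below is a sketch of batching the work with one reusable prepared statement inside a transaction; the host, credentials and commit interval are placeholders:

        <?php
        // Sketch: reuse one prepared statement and commit in batches
        // instead of letting every UPDATE auto-commit on its own.
        $db = new mysqli('localhost', 'user', 'pass', 'dbname');
        $db->autocommit(false);   // stop committing after every single UPDATE

        $stmt = $db->prepare(
            "UPDATE `table` SET `row1` = `row1` + 1 WHERE `UniqueValue` = ?"
        );

        for ($i = 1; $i <= 10000; $i++) {
            $value = '5';                       // placeholder for the randomly generated value
            $stmt->bind_param('s', $value);
            $stmt->execute();
            if ($i % 1000 === 0) {
                $db->commit();                  // flush a batch of 1,000 iterations at once
            }
        }
        $db->commit();
        $stmt->close();
        $db->close();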

    Read the article

  • Any recommended VC++ settings for better PDB analysis on release builds

    - by Brian R. Bondy
    Are there any VC++ settings I should know about to generate better PDB files that contain more information? I have a crash dump analysis system in place based on the project crashrpt. Also, my production build server has the source code installed on the D:\, but my development machine has the source code on the C:\. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe if I had my dev machine's source code on the D:\ it would work.

    Read the article

  • When is porting data from MySQL to CouchDB NOT advisable? Seeking cautionary tales

    - by dan
    I've dabbled in CouchDB and I have pretty good MySQL experience. I've also created one production application that uses both. I like MySQL, but I've run into scaling/concurrency issues with it that CouchDB advertises itself as a general solution for. The problem is that I have MySQL-based applications that are pretty huge, and I don't really know whether it would be a good idea to try to port them over to a CouchDB datastore. I don't want to put in a lot of time and effort only to find out that my application is really not a good fit for CouchDB. Is there any sort of informed consensus on when porting a MySQL-based app to CouchDB is NOT advisable? Any cautionary tales? I think CouchDB is really cool and want to use it more. I'd also like to know ahead of time which specific data-querying scenarios CouchDB is really not good for, or whether CouchDB can really replace MySQL for all the applications I create going forward.

    Read the article

  • sql server 2005 replication article conflict

    - by Daniel
    Hi all, I have a SQL Server 2005 database that I want to set up replication for. The problem is that the database has two schemas, both of which contain a table with the same name. Even though the tables are in different schemas, creating the replication through Management Studio fails due to conflicting article names (I assume it's trying to use the same article name for both tables). Is there any workaround for doing this in the Studio? I can probably write a script or program to do it, but for just this one thing that's a bit annoying, and it probably won't be allowed to run in production. Perhaps there is a hotfix or something I'm not aware of? Cheers,

    Read the article

  • How to modify the Title Bar text for SQL Server Management Studio?

    - by DaveDev
    Sometimes I keep multiple instances of SQL Server Management Studio 2005 open. I might have the dev database open in one and the production database open in another. These appear in the Windows task bar with the text "Microsoft SQL Serve...", which means it's impossible to differentiate between them unless I open the window and scroll the Object Explorer up to see which server the window is actually connected to. Is there any way I can get the window to display the server name first and then the name of the application, like "Dev-DB.database_name - Microsoft SQL Serve..." or whatever?

    Read the article

  • Are there any medium-sized web applications built with CGI::Application that are open-sourced?

    - by mithaldu
    I learn best by taking apart something that already does something and figuring out why decisions were made the way they were. Recently I've started working with Perl's CGI::Application framework, but I find I don't really get along well with the documentation (too little information on how to best structure an application with it). There are some examples of small applications on the cgi-app website, but they're mostly structured to demonstrate a single small feature and consist largely of code that one would never actually use in production. Other examples are massively huge and would require way too much time to dig through, and most of the rest is just stuff that runs on cgi-app but isn't open source. As such, I am looking for something that has most of the base functionality, like user logins, DB access, some processing, etc.; something that is actually used for something, but not so big that it would take hours just to set it up. Does something like that exist, or am I out of luck?

    Read the article

  • Rails test across multiple environments

    - by DSimon
    Is there some way to change Rails environments mid-way through a test? Or, alternately, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases. Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of an app, and upload their work to the online app by saving a file to a thumbdrive and taking it to an Internet cafe. The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, to represent the main online version of the app and the local offline version of the app respectively.

    Read the article

  • Deploying BlackBerry Applications Across Environments

    - by sethxian
    Has anyone come up with a good solution for deploying BlackBerry applications across different environments? My example is storing URLs and authentication information in code. In most cases a developer is going to have a different set of URLs to test against when developing the application versus what the end user is going to hit. The idea is that when I build for production, something swaps in the settings for the target environment, instead of my manually replacing environment-specific code each time. I am currently using Eclipse. The only thought I've come up with so far is to use a resource with encrypted values and have it swapped when I run my build. Any ideas?

    Read the article

  • How painful is a django project upload to a live (staging) site?

    - by Ignacio
    Hi, I've been moving quite fast with a small Django project of mine, which I'm developing locally, of course. But as I had never worked with Django before, I'm not aware of what it involves to upload it and test it on a production server. And I'm quite curious, since I'm very eager to test an early release live. I know there is this document, which I think will be really helpful: http://djangobook.com/en/2.0/chapter12/ But are there any details I should take into account before, during and after the deployment? Any advice or best practices? Thanks.

    Read the article
