Search Results

Search found 7957 results on 319 pages for 'production databases'.

  • Why don't I have access to setReadDataOnly() or enableMemoryOptimization() in PHPExcel?

    - by Edward Tanguay
    I've downloaded PHPExcel 1.7.5 Production. I would like to use setReadDataOnly() and enableMemoryOptimization() as discussed in their forum and in Stack Overflow questions. However, when I use them I get a "Call to undefined method" error. Is there another version, plugin, or library that I have not installed? What do I have to do to access these methods?
      $objPHPExcel = PHPExcel_IOFactory::load("data/".$file_name);
      $objPHPExcel->setReadDataOnly(true);      // Call to undefined method
      $objPHPExcel->enableMemoryOptimization(); // Call to undefined method
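
    One likely explanation (a hedged note, not confirmed against the PHPExcel source): in the 1.7.x API, setReadDataOnly() belongs to the reader object created by PHPExcel_IOFactory::createReader(), not to the loaded workbook, which would explain the undefined-method error on $objPHPExcel. A minimal sketch, assuming an Excel2007 file:
      // Configure the reader first, then load through it.
      $objReader = PHPExcel_IOFactory::createReader('Excel2007');
      $objReader->setReadDataOnly(true);
      $objPHPExcel = $objReader->load("data/".$file_name);
    As for enableMemoryOptimization(), I am not aware of such a method in 1.7.5; the usual memory lever in that release is cell caching via PHPExcel_Settings::setCacheStorageMethod(), so treat that method name as something to verify against your copy of the library.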

    Read the article

  • How can I do block-oriented disk I/O with Java? Or similar for a B+ tree

    - by Sanoj
    I would like to implement a B+ tree in Java and optimize it for disk-based I/O. Is there an API for accessing individual disk blocks from Java, or an API that can do similar block-oriented access that fits my purpose? I would like to create something like Tokyo Cabinet in 100% Java. Does anyone know what Java-only databases like JavaDB use in the back-end for this? I know there are probably other languages better suited to this than Java, but I am doing it purely as a learning exercise.
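
    For reference, pure-Java stores generally do not touch raw disk blocks; they treat an ordinary file as an array of fixed-size pages and read/write them at computed offsets with java.nio. A minimal sketch of that idea (block size and class name are arbitrary choices here, not from any particular database):
      import java.io.IOException;
      import java.nio.ByteBuffer;
      import java.nio.channels.FileChannel;
      import java.nio.file.Paths;
      import java.nio.file.StandardOpenOption;

      /** Reads and writes fixed-size "blocks" (pages) of a single file by block index. */
      public class BlockFile implements AutoCloseable {
          private static final int BLOCK_SIZE = 4096; // one page; pick to match your B+ tree node size
          private final FileChannel channel;

          public BlockFile(String path) throws IOException {
              channel = FileChannel.open(Paths.get(path),
                      StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE);
          }

          public ByteBuffer readBlock(long blockIndex) throws IOException {
              ByteBuffer buf = ByteBuffer.allocate(BLOCK_SIZE);
              channel.read(buf, blockIndex * BLOCK_SIZE); // positional read, no seek state
              buf.flip();
              return buf;
          }

          public void writeBlock(long blockIndex, ByteBuffer buf) throws IOException {
              channel.write(buf, blockIndex * BLOCK_SIZE); // positional write
          }

          @Override
          public void close() throws IOException {
              channel.close();
          }
      }
    A memory-mapped file (FileChannel.map) is the other common route when the file fits comfortably in the address space.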

    Read the article

  • Accessing variable from ARGV

    - by snaken
    I'm writing a cPanel postwwwact script; if you're not familiar with it, the script is run after a new account is created. It relies on the user account variable being passed to the script, which I then use for various things (creating databases etc). However, I can't seem to find the right way to access the variable I want. I'm not that good with shell scripts, so I'd appreciate some advice. I had read somewhere that the value I wanted would be included in $ARGV{'user'}, but this simply gives "root" as opposed to the value I need. I've tried looping through all the arguments (list of arguments here) like this:
      #!/bin/sh
      for var
      do
        touch /root/testvars/$var
      done
    and the value I want is in there, I'm just not sure how to accurately target it. There's info here on doing this with PHP or Perl, but I have to do this as a shell script. EDIT: Ideally I would like to be able to call the variable by something other than $1 or $2 etc., as this would create issues if an argument is added or removed. Any ideas?
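
    A hedged sketch of one way to target the value without relying on $1/$2 positions. It assumes the hook receives alternating key/value arguments (e.g. ... user someaccount ...), which is what dumping the arguments as above suggests; confirm the exact format your cPanel version passes before relying on it:
      #!/bin/sh
      # Walk the argument list looking for the "user" key and take the value after it.
      user=""
      while [ $# -gt 1 ]; do
          if [ "$1" = "user" ]; then
              user="$2"
          fi
          shift
      done
      echo "new account: $user" >> /root/testvars/log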

    Read the article

  • Git Use Remote Source During Push

    - by ThinkBohemian
    I have a local git repository and a "central" repo at GitHub. I'm working on one part of a project while a friend works on a related piece that lives in its own, entirely separate repo. Is it possible for me to simply link directly to my friend's repo? For example, the app is called widgets. I have all my code in widgets/app/mycode and my friend is writing code that goes into widgets/plugins/awesome/hiscode. I want http://github.com/mycode/widgets/plugins/hiscode to always be a direct link or clone of http://github.com/hiscode/awesome. It's possible I'm missing something basic in my question or in my knowledge of git; if so please ask, and I'll be happy to fill in the blanks. I am deploying to my production site via Capistrano, so maybe a script of some kind would be easier? I don't know (that's why I'm posting)!
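
    This sounds like what git submodules are for; a hedged sketch, with the URL taken from the question and the submodule path assumed to be plugins/awesome/hiscode:
      # Run inside the widgets repository.
      git submodule add http://github.com/hiscode/awesome plugins/awesome/hiscode
      git commit -m "Track the awesome repo as a submodule"

      # Anyone cloning widgets afterwards pulls the plugin with:
      git submodule update --init
    Capistrano can also be told to check submodules out during deploy, but the exact setting depends on the Capistrano version, so treat that part as something to verify.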

    Read the article

  • Posting within PHP and receiving only cookies

    - by faya
    Hello, I have a question. It might sound ridiculous, but let me explain what I want to accomplish. I am trying to embed an open-source forum into my site, and I want to keep the forum's database and my site's database separate. When my site's users log in, I want them to be automatically logged in to the forum as well. To do that I want to log in to the forum from within my PHP code after receiving the username and password in the POST, but I don't know how I can get only the cookies from the response. I have found out that I can use curl_init(), curl_exec(), and curl_close(), but curl_exec() returns the whole response (page content, cookies, headers). Is there a way to receive only the cookies? P.S. If my design is totally wrong, please give advice on how I can embed this functionality! I would be very thankful!
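
    A hedged sketch of one way to do this with cURL: ask for the response headers back (CURLOPT_HEADER) and pull the Set-Cookie lines out; the forum URL and field names below are placeholders:
      $ch = curl_init('http://example.com/forum/login.php');
      curl_setopt($ch, CURLOPT_POST, true);
      curl_setopt($ch, CURLOPT_POSTFIELDS, array('username' => $user, 'password' => $pass));
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_HEADER, true); // include headers in the returned string
      $response = curl_exec($ch);
      curl_close($ch);

      // Grab every Set-Cookie value (name=value, without attributes).
      preg_match_all('/^Set-Cookie:\s*([^;\r\n]+)/mi', $response, $matches);
      $cookies = $matches[1];
    Alternatively, CURLOPT_COOKIEJAR / CURLOPT_COOKIEFILE let cURL persist the session cookies to a file for later requests, which may be enough if you never need to read them yourself.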

    Read the article

  • Getting fields of a class through reflection

    - by Water Cooler v2
    I've done this a gazillion times in the past, successfully. This time I'm suffering a lapse of amnesia. I am just trying to get the fields of an object. It is an embarrassingly simple piece of code that I am writing in a test solution before doing something really useful in production code. Strangely, the GetFieldsOf method reports a zero-length array for the "Amazing" class. Help.
      class Amazing
      {
          private NameValueCollection _nvc;
          protected NameValueCollection _myDict;
      }

      private static FieldInfo[] GetFieldsOf(string className, string nameSpace = "SomeReflection")
      {
          Type t;
          return (t = Assembly.GetExecutingAssembly().GetType(
              string.Format("{0}.{1}", nameSpace, className))) == null
              ? null
              : t.GetFields();
      }
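
    For what it's worth, the usual cause of an empty result here is that Type.GetFields() with no arguments returns only public fields, and both fields of Amazing are non-public. A hedged sketch of the call with explicit BindingFlags:
      // Include non-public instance (and static) fields in the result.
      FieldInfo[] fields = t.GetFields(
          BindingFlags.Instance | BindingFlags.Static |
          BindingFlags.Public | BindingFlags.NonPublic);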

    Read the article

  • ColdFusion CFC creation taking a variable amount of time to execute.

    - by Bazza
    I've been logging object creation times in our open-account process in production. Periodically, initializing an object takes way longer than expected. By initializing I mean calling its init() and passing a couple of arguments that may be simple variables or objects, e.g.
      <cfset validateObj = createObject("component", "compExample").init(
          productionMode = VARIABLES.productionMode,
          ipWhiteListed  = isWhiteListed,
          ipLocatorObj   = VARIABLES.ipLocatorObj
      ) />
    That's all that happens in the init() methods. Generally the execution time is 0ms, but at random times I might get 3 or 3.5 seconds. This is not specific to one particular server or to our generally busy period; it appears to be quite random. One thought was that these templates were being evicted from our template cache, as they are not especially frequently used, although I checked cfstat on a number of servers and the max CP/Sec is -1. Running CF 8.0.1. Has anybody else ever come across this?

    Read the article

  • How do I implement .NET plugins without using AppDomains?

    - by Abtin Forouzandeh
    Problem statement: implement a plug-in system that allows the associated assemblies to be overwritten (avoid file locking). In .NET, specific assemblies may not be unloaded; only entire AppDomains may be unloaded. I'm posting this because when I was trying to solve the problem, every solution made reference to using multiple AppDomains. Multiple AppDomains are very hard to implement correctly, even when architected in at the start of a project. Also, AppDomains didn't work for me because I needed to transfer a Type across domains as a setting for the Speech Server workflow's InvokeWorkflow activity; unfortunately, sending a type across domains causes the assembly to be injected into the local AppDomain. Also, this is relevant to IIS. IIS has a Shadow Copy setting that allows an executing assembly to be overwritten while it is loaded into memory. The problem is that (at least under XP; I didn't test on production 2003 servers) when you programmatically load an assembly, the shadow copy doesn't work (because you are loading the DLL, not IIS).
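
    One technique that addresses the file-locking half of this (though not unloading) is to load the plug-in from a byte array, so the DLL on disk is never held open; a hedged sketch, with the path as a placeholder:
      using System;
      using System.IO;
      using System.Reflection;

      static Assembly LoadPluginWithoutLocking(string dllPath)
      {
          // Loading from memory leaves the file free to be overwritten later;
          // the loaded assembly itself still cannot be unloaded.
          byte[] raw = File.ReadAllBytes(dllPath);
          return Assembly.Load(raw);
      }
    Note that assemblies loaded this way have no Location, so resolving their dependencies may require an AppDomain.AssemblyResolve handler.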

    Read the article

  • Rails: different model config in dev and production environments

    - by Denis
    Hi, I have a model which uses Paperclip. In the dev environment I want to store files on the file system; in production I want to store them in my S3 account. How do I configure my model to reflect this difference? Here is my model:
      class Photo < ActiveRecord::Base
        has_attached_file :photo,
          :styles => { :medium => "200x200>", :thumb => "100x100>" },
          :storage => :s3,
          :s3_credentials => "#{Rails.root}/config/s3.yml",
          :path => "/:style/:filename"
      end
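
    A hedged sketch of one common approach: branch on Rails.env in the model (another is to set per-environment Paperclip defaults in config/environments/*.rb). Paperclip falls back to filesystem storage when :storage is not given:
      class Photo < ActiveRecord::Base
        if Rails.env.production?
          has_attached_file :photo,
            :styles => { :medium => "200x200>", :thumb => "100x100>" },
            :storage => :s3,
            :s3_credentials => "#{Rails.root}/config/s3.yml",
            :path => "/:style/:filename"
        else
          has_attached_file :photo,
            :styles => { :medium => "200x200>", :thumb => "100x100>" }
        end
      end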

    Read the article

  • Query two tables from different schema

    - by Guru
    Hi, I have two different schemas in Oracle (say S1 and S2), and two tables in those schemas (say S1.Table1 and S2.Table2). I want to query these two tables from schema S1. S1 and S2 are in different databases. From DB1, schema S1, I want to do something like this:
      select T1.Id
      from S1.Table1 T1, S2.Table2 T2
      where T1.Id = T2.refId
    I know one way of doing this would be creating a DB link for the second schema and using it in the query, but sadly I don't have the privileges to create a DB link. Is there some way to do this without a DB link? In TOAD, for instance, you can compare two schemas' objects, but that is a general comparison, not querying them. Any ideas or suggestions are greatly appreciated. Thanks in advance.

    Read the article

  • Zip Code to City/State and vice-versa in a database?

    - by Simucal
    I'm new to SQL and relational databases and I have what I imagine is a common problem. I'm making a website, and when each user submits a post they have to provide a location as either a zip code or a city/state. What is the best practice for handling this? Do I simply create zip code, city, and state tables and query against them, or are there ready-made solutions for handling this? I'm using SQL Server 2005 if it makes a difference. I need to be able to retrieve a zip code given a city/state, and to spit out the city/state given a zip code.
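
    One minimal layout that covers both lookups is a single table keyed by zip code, since a zip usually maps to one primary city/state while a city/state maps to many zips. A hedged T-SQL sketch, with table and column names invented here and the rows themselves coming from a purchased or public zip-code dataset:
      CREATE TABLE ZipCodes (
          ZipCode   CHAR(5)     NOT NULL PRIMARY KEY,
          City      VARCHAR(64) NOT NULL,
          StateCode CHAR(2)     NOT NULL
      );

      -- city/state -> zip codes
      SELECT ZipCode FROM ZipCodes WHERE City = @City AND StateCode = @State;

      -- zip code -> city/state
      SELECT City, StateCode FROM ZipCodes WHERE ZipCode = @Zip;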

    Read the article

  • Find groups with both validated, unvalidated users

    - by Matchu
    (Not my real MySQL schema, but it illustrates what needs to be done.) Users can belong to many groups, and groups have many users.
      users:        id INT, validated TINYINT(1)
      groups:       id INT, name VARCHAR(20)
      groups_users: group_id INT, user_id INT
    I need to find groups that contain both validated and unvalidated users (validated being 1 or 0, respectively), in order to perform a specific manual maintenance task. There are thousands of users, all belonging to at least one group, but a group usually has only 2-5 users. This is a live production server, so I could probably craft a query myself, but the last one I tried took a matter of minutes before I killed it. (I'm not one of those brilliant SQL wizards.) I suppose I could take the server down for maintenance, but, if possible, a query that gets this job done in a matter of seconds would be fantastic. Thanks!
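
    A hedged sketch of a single aggregate query for this: group the join by group_id and keep groups where validated takes both values (table and column names as given above):
      SELECT gu.group_id
      FROM groups_users gu
      JOIN users u ON u.id = gu.user_id
      GROUP BY gu.group_id
      HAVING MIN(u.validated) = 0
         AND MAX(u.validated) = 1;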

    Read the article

  • If you don't own the proprietary database engine, what is the best way to convert a database to MySQL?

    - by John Robertson
    I work for a very small company. I was recently faced with the question of whether there is a good way to convert a proprietary database to a MySQL database without owning the proprietary database engine. E.g., if one is given a large Oracle database file (or choose your favorite proprietary database engine format) but doesn't have a license for the Oracle database engine, is there a good, perfectly reliable way to convert it to a MySQL database format that can be read with the MySQL database engine? My question is vague as to which proprietary format is the source simply because there would be multiple sources, and it looks like they would be various and sundry. My suspicion is that there is no perfectly reliable way, especially for a wide variety of proprietary databases. If there are a few proprietary formats for which this is possible, I would still be interested in knowing, though "various and sundry" is probably the real issue. Minimizing cost and effort while getting a correct conversion is key, so I think this probably belongs on the "not possible" list. -John

    Read the article

  • Microsoft Sync Framework - How to reprovision a table (or entire scope) after schema changes?

    - by Rabbi
    I have already set up syncing with Microsoft Sync Framework, and now I need to add fields to a table. How do I re-provision the databases? The setup is exceedingly simple: two SQL Express 2008 servers; the scope includes the entire database; Microsoft Sync Framework 2.0; synchronizing by direct access, using the standard new SqlSyncProvider. Do I make the structural changes at both ends, or do I only change one server and let Sync Framework somehow propagate the change? Do I need to delete the _tracking tables and/or the stored procedures? How about the triggers? Has anyone been using the Sync Framework? Please help.

    Read the article

  • Running my web site in a 32-bit application pool on a 64-bit OS.

    - by Jeremy H
    Here is my setup:
      Dev:
      - Windows Server 2008 64-bit
      - Visual Studio 2008
      - Solution with 3 class libraries, 1 web application
      Staging web server:
      - Windows Server 2008 R2 64-bit
      - IIS 7.5 integrated application pool with 32-bit applications enabled
    In Visual Studio I have set all 4 of my projects to compile to 'Any CPU', but when I run this web application on the web server with the 32-bit application pool it times out and crashes. When I run the application pool in 64-bit mode it works fine. The production web server requires me to run a 32-bit application pool on a 64-bit OS, which is why the staging web server is configured this way. (I considered posting on Server Fault, but the server part seems to be working fine. It is my code specifically that doesn't seem to want to run in a 32-bit application pool, which is why I am posting here.)

    Read the article

  • How do I delete all tables in a database using SqlCommand?

    - by mafutrct
    Unlike the other posts about the task "delete all tables", this question is specifically about using SqlCommand to access the database. A coworker is facing the problem that, no matter how he attempts it, he can't delete all tables from a database via SqlCommand. He states that he can't perform such an action as long as there is an active connection - the connection held by the SqlCommand itself. I believe this should be possible and easy, but I don't have much of a clue about databases, so I came here to ask. It would be awesome if you could provide a short code example in C# (or any .NET language, for that matter). If you require additional information, just leave a comment.
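
    A hedged sketch of one way this can work over a single open connection: drop every foreign-key constraint first, then drop every user table (connection string is a placeholder; views, non-default schemas and error handling are ignored):
      using System.Data.SqlClient;

      const string dropForeignKeys = @"
          DECLARE @sql NVARCHAR(MAX); SET @sql = N'';
          SELECT @sql = @sql + N'ALTER TABLE '
              + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + N'.'
              + QUOTENAME(OBJECT_NAME(parent_object_id))
              + N' DROP CONSTRAINT ' + QUOTENAME(name) + N';'
          FROM sys.foreign_keys;
          EXEC sp_executesql @sql;";

      const string dropTables = @"
          DECLARE @sql NVARCHAR(MAX); SET @sql = N'';
          SELECT @sql = @sql + N'DROP TABLE '
              + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME) + N';'
          FROM INFORMATION_SCHEMA.TABLES
          WHERE TABLE_TYPE = 'BASE TABLE';
          EXEC sp_executesql @sql;";

      using (var connection = new SqlConnection(connectionString))
      {
          connection.Open();
          new SqlCommand(dropForeignKeys, connection).ExecuteNonQuery();
          new SqlCommand(dropTables, connection).ExecuteNonQuery();
      }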

    Read the article

  • SQL Server using too much memory

    - by Israel Pereira Valverde
    I have installed SQL Server 2008 R2 Express on my desktop machine (Windows 7). I have only one local server running (./SQLEXPRESS), but the sqlserver process is taking all the RAM it can get. On a machine with 3GB of RAM things start to get slow, so I limited the maximum amount of RAM in the server, and now SQL Server constantly gives error messages that there is not enough memory. It's using 1GB of RAM with only one local server and 2 completely empty databases; how is 1GB of RAM not enough? When the process starts it uses a really acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the defined maximum and starts to complain about not having enough memory available. At that point I have to restart the server to use it again. I have read about a hotfix for one of the errors I got from SQL Server ("There is insufficient system memory in resource pool 'internal' to run this query"), but it's already installed on my SQL Server. Why is it using so much memory?
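
    For reference, a hedged sketch of the setting being described, capped from T-SQL (512 is just an example value in MB):
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max server memory (MB)', 512;
      RECONFIGURE;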

    Read the article

  • Adding a MongoDB collection to Netbeans

    - by Saif Bechan
    In NetBeans I have an option to add my MySQL databases to the IDE. This way I can easily browse them and do small queries. Now I am working on a MongoDB project, and I want to know if it is possible to get the same functionality. I see that the MongoDB website has a list of drivers, and I see that you can add drivers in NetBeans, but I do not know whether this is the same thing or whether it can be used that way. I have tried Google, but no luck. Does anyone have an idea?

    Read the article

  • WHERE IN Query with two recordsets in Access VBA

    - by Henry Owens
    Hi all, my first post here, so I hope this is the right area. I am currently trying to compare two recordsets, one of which comes from an Excel named range and the other from a table in the Access database. The code for each is:
      Set existingUserIDs = db.OpenRecordset("SELECT Username FROM UserData")
      Set IDsToImport = exceldb.OpenRecordset("SELECT Username FROM Named_Range")
    The problem is that I would like to somehow compare these two recordsets without looping (there is a very large number of records). Is there any way to do a join or similar on these recordsets? I cannot do a join before creating the recordsets, because one is coming from Excel and the other from Access, so they are two different DAO databases. The end goal is that I will choose only the usernames that do not already exist in the Access table to be imported (so in an SQL query, it would be a NOT IN (table)). Thanks for any assistance you can lend! Regards, Bricky.
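
    A hedged sketch of one way to push the whole comparison into a single query: Jet/Access SQL can address an Excel range directly by embedding a connect string in the FROM clause, so the NOT IN runs inside one recordset. The workbook path is a placeholder and the "Excel 8.0" version string may need adjusting for your file format:
      Dim usersToImport As DAO.Recordset
      Set usersToImport = db.OpenRecordset( _
          "SELECT x.Username " & _
          "FROM [Excel 8.0;HDR=YES;DATABASE=C:\path\to\workbook.xls].[Named_Range] AS x " & _
          "WHERE x.Username NOT IN (SELECT u.Username FROM UserData AS u)")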

    Read the article

  • Container for database-like searches

    - by Milan Babuškov
    I'm looking for some STL, Boost, or similar container to use the way indexes are used in databases, to search for records using a query like:
      select * from table1 where field1 starting with 'X';
    or
      select * from table1 where field1 like 'X%';
    I thought about using std::map, but I cannot, because I need to search for fields that "start with" some text, not fields that are "equal to" it. I could create a sorted vector or list and use binary search (breaking the set in two at each step by reading the element in the middle and seeing if it's more or less than 'X'), but I wonder if there is some ready-made container I could use without reinventing the wheel?
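
    For what it's worth, std::map (or std::set) can answer prefix queries precisely because it is ordered: lower_bound finds the first key >= the prefix, and iteration stops once keys no longer start with it. A small sketch with made-up data:
      #include <iostream>
      #include <map>
      #include <string>

      int main() {
          std::map<std::string, int> index = {
              {"Xavier", 1}, {"Xenon", 2}, {"Yarn", 3}
          };
          const std::string prefix = "X"; // "field1 LIKE 'X%'"
          for (auto it = index.lower_bound(prefix);
               it != index.end() && it->first.compare(0, prefix.size(), prefix) == 0;
               ++it) {
              std::cout << it->first << " -> " << it->second << '\n';
          }
      }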

    Read the article

  • Filtering columns in SQL Server replication - how?

    - by truthseeker
    Hi, I need to replicate some data from two tables in one database to another database. I used snapshot replication. The issue is that I would like to replicate only some selected columns, and the other columns at the destination should keep their existing data untouched; I don't want to lose it. The source of those other columns is another system, so I need to replicate only the data in my columns. Does anybody know how to achieve this?

    Read the article

  • Sharing a database connection with included classes in a Sinatra application

    - by imightbeinatree
    I'm converting part of a Rails application into its own Sinatra application. It has some beefy work to do, and rather than have a million helpers in app.rb, I've separated some of it out into classes. Without access to Rails I'm rewriting several finder methods and need access to the database inside my classes. What's the best way to share a database connection between your application and a class? Or would you recommend pushing all database work into its own class and only establishing the connection there? Here is what I have in app.rb:
      require 'lib/myclass'

      configure :production do
        MysqlDB = Sequel.connect('mysql://user:password@host:port/db_name')
      end
    I want to access it in lib/myclass.rb:
      class Myclass
        def self.find_by_domain_and_stub(domain, stub)
          # want to do a query here
        end
      end
    I've tried several things, but nothing that seems to work well enough to even include as an example.
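
    A hedged sketch of the simplest route: since MysqlDB is a top-level constant, the class can use it directly once it has been assigned; the table and column names below are placeholders. One wrinkle in the snippet above is that the constant is only set inside configure :production, so it would be undefined in other environments unless it is also assigned there (or outside the configure block):
      class Myclass
        def self.find_by_domain_and_stub(domain, stub)
          # Sequel dataset query; returns a hash for the first matching row, or nil.
          MysqlDB[:myclasses].where(:domain => domain, :stub => stub).first
        end
      end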

    Read the article

  • WebLogic server/lib vs instance libext

    - by Sebastien Lorber
    Hello, I use WebLogic 10. It provides an Oracle JDBC driver 10.2.0.2 (in server/lib under the WebLogic home). A long time ago someone at work put a 10.2.0.3 driver in the instance's libext folder. But in production we got a JDBC driver stack trace (a NullPointerException), and from reverse engineering it seems we are using driver 10.2.0.2. We know that we could change the driver in WebLogic's server/lib, but I want to understand: isn't libext supposed to override the server libs, the way META-INF libs override libext? By the way, we are in a strange situation: we have 2 EARs, and for the exact same treatment in those 2, one will sometimes throw the Oracle driver NullPointerException while the other doesn't. I wonder if one EAR is using 10.2.0.2 while the other is using 10.2.0.3 (I saw a fixed bug that could fit our problem for this version). I need to look closer, but at first sight both EARs use the exact same datasource set in the WebLogic JNDI resources. Any idea?

    Read the article

  • Why does Tokyo Tyrant slow down exponentially even after adjusting bnum?

    - by HenryL
    Has anyone successfully used Tokyo Cabinet / Tokyo Tyrant with large datasets? I am trying to upload a subgraph of the Wikipedia data source. After hitting about 30 million records, I get exponential slow-down. This occurs with both the HDB and BDB databases. I adjusted bnum to 2-4x the expected number of records for the HDB case, with only a slight speed-up. I also set xmsiz to 1GB or so, but ultimately I still hit a wall. It seems that Tokyo Tyrant is basically an in-memory database, and after you exceed xmsiz or your RAM you get a barely usable database. Has anyone else encountered this problem before? Were you able to solve it?

    Read the article

  • Get the newest file from directory structure year/month/date/time

    - by Radek
    I store backups of databases in a directory structure year/month/day/time/backup_name. Examples would be:
      basics_mini/2012/11/05/012232/RATIONAL.0.db2inst1.NODE0000.20110505004037.001
      basics_mini/2012/11/06/012251/RATIONAL.0.db2inst1.NODE0000.20110505003930.001
    Note that the timestamp embedded in the backup file name cannot be used; before the automation testing starts, the server time is set to 5.5.2011. So the question is how I can get the latest file if I pass the "base directory" (basics_mini) to some function that I am going to write. My thought is to list the base directory and sort to get the year, then do the same for month, day and time. I wonder if there is an "easier" solution to this in PHP.
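
    A hedged sketch of one shortcut: because every path component (year/month/day/time) is zero-padded and numeric, a plain string sort of the matching directories already puts the newest one last; the function name is made up here:
      // Return a backup file from the newest year/month/day/time directory, or null.
      function latest_backup($base)
      {
          $dirs = glob($base . '/*/*/*/*', GLOB_ONLYDIR); // year/month/day/time
          if (empty($dirs)) {
              return null;
          }
          sort($dirs, SORT_STRING);
          $files = glob(end($dirs) . '/*');
          return empty($files) ? null : $files[0];
      }

      $newest = latest_backup('basics_mini');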

    Read the article
