Search Results

Search found 1968 results on 79 pages for 'pickle dump'.

Page 68/79 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • Computing, storing, and retrieving values to and from an N-Dimensional matrix

    - by Adam S
    This question is probably quite different from what you are used to reading here - I hope it can provide a fun challenge. Essentially I have an algorithm that uses 5 (or more) variables to compute a single value, called outcome. Now I have to implement this algorithm on an embedded device which has no memory limitations, but has very harsh processing constraints. Because of this, I would like to run a calculation engine which computes outcome for, say, 20 different values of each variable and stores this information in a file. You may think of this as a 5 (or more)-dimensional matrix or 5 (or more)-dimensional array, each dimension being 20 entries long. In any modern language, filling this array is as simple as having 5 (or more) nested for loops. The tricky part is that I need to dump these values into a file that can then be placed onto the embedded device so that the device can use it as a lookup table. The questions now are:

    1. What format(s) might be acceptable for storing the data?
    2. What programs (MATLAB, C#, etc.) might be best suited to compute the data?
    3. C# must be used to import the data on the device - is this possible given your answer to #1?
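    A minimal sketch of the "fill the grid with nested loops and dump it" step described above, written in Python just as an illustration; compute_outcome, the sample axis, and the flat little-endian float32 file layout are all assumptions made for the example, not part of the question. A file written this way can be read on the C# side with a BinaryReader and the same row-major index arithmetic.

      # Sketch: precompute a 5-D lookup table and dump it as flat little-endian float32.
      # compute_outcome() and the variable ranges are hypothetical placeholders.
      import itertools
      import struct

      STEPS = 20       # 20 sample points per variable
      N_VARS = 5       # 5 input variables

      def compute_outcome(*vals):
          # placeholder for the real algorithm
          return sum(v * v for v in vals)

      # one axis of 20 sample values, reused for every variable here for simplicity
      axis = [i / (STEPS - 1) for i in range(STEPS)]

      with open("lookup_table.bin", "wb") as f:
          # iterate the 5-D grid in row-major order; the device indexes it the same way:
          # offset = ((((i0*20 + i1)*20 + i2)*20 + i3)*20 + i4) * 4 bytes
          for combo in itertools.product(axis, repeat=N_VARS):
              f.write(struct.pack("<f", compute_outcome(*combo)))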

    Read the article

  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW, but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this. If anyone is interested, this is my AWK script:

    EDIT: I noticed today that this script has some problems, please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them.

      # File: remove_empty_tables.awk
      # Copyright (c) Northwestern University, 2010
      # http://edw.northwestern.edu
      /^--$/ {
          i = 0; line[++i] = $0; getline
          if ($0 ~ /-- Definition/) {
              inserts = 0;
              while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                  # If we already have an insert:
                  if (inserts > 0)
                      print
                  else {
                      # If we found an INSERT statement, the table is NOT empty:
                      if ($0 ~ /^INSERT /) {
                          ++inserts
                          # Dump the lines before the INSERT and then the INSERT:
                          for (j = 1; j <= i; ++j)
                              print line[j]
                          i = 0
                          print $0
                      }
                      # Otherwise we may yet find an insert, so save the line:
                      else
                          line[++i] = $0
                  }
                  getline   # go to the next line
              }
              line[++i] = $0; getline
              line[++i] = $0; getline
              if (inserts > 0) {
                  for (j = 1; j <= i; ++j)
                      print line[j]
                  print $0
              }
              next
          } else {
              print "--"
          }
      }
      { print }
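    On the "is there a better way" part, one possible alternative is sketched below in Python: buffer each table's block and only emit it if it contains an INSERT. It assumes tables are delimited by the "-- Table structure for table" comments that mysqldump writes, which is not the marker used in the dump above, so the marker would need adjusting for other tools.

      # Rough sketch, not a tested tool. Usage: python filter_dump.py < backup.sql > filtered.sql
      import sys

      MARKER = "-- Table structure for table"   # assumed delimiter; adjust for your dump format

      def emit(block, out):
          # keep the block only if the table actually has data
          if any(line.startswith("INSERT ") for line in block):
              out.writelines(block)

      def filter_dump(src, out):
          block = []          # lines belonging to the current table
          preamble = True     # everything before the first table is kept as-is
          for line in src:
              if line.startswith(MARKER):
                  if preamble:
                      out.writelines(block)
                      preamble = False
                  else:
                      emit(block, out)
                  block = []
              block.append(line)
          if preamble:
              out.writelines(block)
          else:
              emit(block, out)

      if __name__ == "__main__":
          filter_dump(sys.stdin, sys.stdout)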

    Read the article

  • How do I get rid of this "(" using regex?

    - by Solignis
    Hi there, I was moving along on a regex expression and I have hit a road block I can't seem to get around. I am trying to get rid of "(" in the middle of a line of text using regex. There were 2, but I figured out how to get the one at the end of the line; it's the one in the middle I can't hack out. Here is the snippet I am searching for in the config file (I put 2 examples):

      guestOSAltName = "Ubuntu Linux (64-bit)"
      guestOSAltName = "Microsoft Windows 2000 Professional"

    Here is the snippet I am working on:

      if ($vmx_file =~ m/^\bguestOSAltName\b\s+\S\s+\W(?<GUEST_OS> .+[^")])\W/xm) {
          $virtual_machines{$vm}{"OS"} = "$+{GUEST_OS}";
      }
      else {
          $virtual_machines{$vm}{"OS"} = "N/A";
      }

    I am thinking the problem is I cannot make a match to "(" because the expression before that is ".+", so that it matches everything in the line of text, be it alphanumeric, whitespace, or even symbols like hyphens. Any ideas how I can get this to work? This is what I am getting for an output from a hash dump:

      $VAR1 = {
          'NS02' => {
              'ID' => '144',
              'Version' => '7',
              'OS' => 'Ubuntu Linux (64-bit',
              'VMX' => '/vmfs/volumes/datastore2/NS02/NS02.vmx',
              'Architecture' => '64-bit'
          },
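    The same idea expressed in Python, purely as an analogy (the question is Perl): capture everything between the quotes first, then strip the parentheses in a second step rather than trying to exclude them inside the capture itself.

      # Illustrative Python analogue, not the asker's Perl code.
      import re

      lines = [
          'guestOSAltName = "Ubuntu Linux (64-bit)"',
          'guestOSAltName = "Microsoft Windows 2000 Professional"',
      ]

      for line in lines:
          m = re.match(r'\s*guestOSAltName\s*=\s*"(?P<GUEST_OS>[^"]+)"', line)
          if m:
              # remove both parentheses in a separate step
              os_name = re.sub(r"[()]", "", m.group("GUEST_OS"))   # "Ubuntu Linux 64-bit"
              print(os_name)
          else:
              print("N/A")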

    Read the article

  • using replace to produce javascript code, django

    - by durdenk
    I want to use Highcharts with my Django site, but it requires a complex JavaScript code such as the one below. So I wanted to get this script into my Python code, replace appropriate portions, then write it into my template. First question: is this a dumb way to do that for a person not knowing JavaScript? (I can read it, though.) Second question: why can't I replace this string? Let's say this string is a variable like this:

      lineChartsTemplate = """
      ...
      ...
      """

    If I try and do lineChartsTemplate.replace('dataCategory', dataCategory), it is basically supposed to change the dataCategory text to my dataCategory variable, but no such luck. I need guidance here. thx.

      $(function () {
          var chart = new Highcharts.Chart({
              chart: {
                  renderTo: 'container',
                  type: 'bar'
              },
              xAxis: {
                  categories: dataCategory
              },
              yAxis: { },
              legend: {
                  layout: 'vertical',
                  floating: true,
                  backgroundColor: '#FFFFFF',
                  align: 'right',
                  verticalAlign: 'top',
                  y: 60,
                  x: -60
              },
              tooltip: {
                  formatter: function() {
                      return '<b>'+ this.series.name +'</b><br/>'+ this.x +': '+ this.y;
                  }
              },
              plotOptions: { },
              series: [{ data: dataList, name: 'Satislar' }]
          });
      });
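    One likely culprit, assuming the snippet is exactly as shown: Python strings are immutable, so str.replace() returns a new string instead of modifying the original, and the result has to be assigned back. The replacement value also has to be a string, e.g. a json.dumps() of the list. A minimal sketch (the template and data values here are made up):

      # str.replace() does not modify in place, so assign the result back.
      import json

      lineChartsTemplate = """
      xAxis: { categories: dataCategory },
      series: [{ data: dataList, name: 'Satislar' }]
      """

      dataCategory = ["Ocak", "Subat", "Mart"]   # hypothetical categories
      dataList = [10, 20, 30]                    # hypothetical values

      rendered = lineChartsTemplate.replace("dataCategory", json.dumps(dataCategory))
      rendered = rendered.replace("dataList", json.dumps(dataList))
      print(rendered)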

    Read the article

  • INSERT..ON DUPLICATE KEY UPDATE - but NOT using the duplicate key to compare.

    - by calumbrodie
    I am trying to solve a problem I have inherited with poor treatment of different data sources. I have a user table that contains BOTH good and evil users.

      create table `users` (
          `user_id` int(13) NOT NULL AUTO_INCREMENT,
          `email` varchar(255),
          `name` varchar(255),
          PRIMARY KEY (`user_id`)
      );

    In this table the primary key is currently set to be user_id. I have another table ('users_evil') which contains ONLY the evil users (all the users from this table are included in the first table) - the user_id's on this table do NOT correspond to those in the first table. I want to have all my users in one table, and simply flag which are good and which are evil. What I want to do is alter the user table and add a column ('evil') which defaults to 0. I then want to dump the data from my 'users_evil' table and run an INSERT..ON DUPLICATE KEY UPDATE with this data into the first table (setting 'evil'=1 where the emails match). The problem is that the PK is set to the user_id and not the 'email'. Any suggestions, or even another strategy to successfully achieve this? Can I run this statement but treat another column as the PK only for the duration of the statement?
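    One point worth noting: ON DUPLICATE KEY UPDATE fires on any UNIQUE index, not only the primary key, so adding a unique index on email is usually enough. A sketch of that approach, driven from Python with the pymysql driver (the driver, the connection details, and the assumption that existing email values are already unique are all mine, not from the question):

      # Sketch only: add an 'evil' flag, make email unique, then upsert from users_evil.
      # Assumes users.email currently has no duplicates, otherwise the ALTER will fail.
      import pymysql

      conn = pymysql.connect(host="localhost", user="root", password="secret", database="mydb")
      with conn.cursor() as cur:
          cur.execute("ALTER TABLE users ADD COLUMN evil TINYINT(1) NOT NULL DEFAULT 0")
          cur.execute("ALTER TABLE users ADD UNIQUE KEY uniq_email (email)")
          cur.execute("""
              INSERT INTO users (email, name, evil)
              SELECT email, name, 1 FROM users_evil
              ON DUPLICATE KEY UPDATE evil = 1
          """)
      conn.commit()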

    Read the article

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now, I find that if I can dump a database into a file I could check it in and use it as just another type of file. The main conditions are:

    - Works for Oracle 9R2
    - Human readable so we can use diff to see the differences (.dmp files don't seem readable)
    - All tables in a batch. We have more than 200 tables.
    - It stores BOTH STRUCTURE AND DATA
    - It supports CLOB and RAW types
    - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms
    - It can be turned into an executable script to rebuild the database on a clean machine
    - Not limited to really small databases (supports at least 200,000 rows)

    It is not easy. I have downloaded a lot of demos that fail in one way or another.

    EDIT: I wouldn't mind alternative approaches provided that they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in a batch mode. By the way, our project has been developed for years. Some approaches can be easily implemented when you make a fresh start but seem hard at this point.

    EDIT: To understand the problem better, let's say that some users can sometimes make changes to the config data in the production environment. Or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge the changes into production.
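    A rough sketch of one possible direction, not a tested tool: pull human-readable DDL with DBMS_METADATA (available in 9iR2) and table data as CSV through cx_Oracle, so both can be checked in and diffed. The connection string, object filtering, and the CLOB/RAW handling are all simplifications and assumptions of mine.

      import csv
      import cx_Oracle

      conn = cx_Oracle.connect("scott/tiger@dbhost/orcl")   # assumed credentials
      cur = conn.cursor()

      # 1. Structure: one DDL statement per object via DBMS_METADATA.
      with open("schema_ddl.sql", "w") as ddl_file:
          cur.execute("""
              SELECT object_type, object_name FROM user_objects
              WHERE object_type IN ('TABLE', 'VIEW', 'INDEX', 'SEQUENCE', 'PROCEDURE',
                                    'FUNCTION', 'PACKAGE', 'PACKAGE BODY', 'SYNONYM')
              ORDER BY object_type, object_name""")
          for obj_type, obj_name in cur.fetchall():
              ddl_cur = conn.cursor()
              ddl_cur.execute("SELECT DBMS_METADATA.GET_DDL(:t, :n) FROM dual",
                              t=obj_type.replace(" ", "_"), n=obj_name)
              ddl = ddl_cur.fetchone()[0].read()      # GET_DDL returns a CLOB
              ddl_file.write(ddl + "\n/\n")

      # 2. Data: one CSV per table, which diffs cleanly between releases.
      cur.execute("SELECT table_name FROM user_tables ORDER BY table_name")
      for (table,) in cur.fetchall():
          data_cur = conn.cursor()
          data_cur.execute("SELECT * FROM " + table)
          with open(table + ".csv", "w", newline="") as out:
              writer = csv.writer(out)
              writer.writerow([col[0] for col in data_cur.description])
              writer.writerows(data_cur.fetchall())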

    Read the article

  • Mongodb update: how to check if an update succeeds or fails?

    - by zmg
    I think the title pretty much says it all. I'm working with MongoDB in PHP using the pecl driver. My updates are working great, but I'd like to build some error checking into my function(s). I've tried using lastError() in a pretty simple function:

      function system_db_update_object($query, $values, $database, $collection) {
          $connection = new Mongo();
          $collection = $connection->$database->$collection;

          $connection->$database->resetError(); // Added for debugging
          $collection->update($query, array('$set' => $values));

          //$errorArray = $connection->$database->lastError();
          var_dump($connection->$database->lastError()); exit; // Var dump and /Exit/
      }

    But pretty much regardless of what I try to update (whether it exists or not) I get these same basic results:

      array(4) {
          ["err"]=> NULL
          ["updatedExisting"]=> bool(true)
          ["n"]=> float(1)
          ["ok"]=> float(1)
      }

    Any help or direction would be greatly appreciated.
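    For comparison only (the question is about the PHP pecl driver, so this is just an analogy): in Python's pymongo, an acknowledged update reports the counts directly, and matched_count distinguishes "nothing matched the query" from "matched but the value was already set".

      # Analogy in Python/pymongo, not the PHP driver from the question.
      from pymongo import MongoClient

      client = MongoClient("mongodb://localhost:27017")   # assumed connection string
      collection = client["mydb"]["mycollection"]

      result = collection.update_one({"name": "example"}, {"$set": {"status": "done"}})

      if result.matched_count == 0:
          print("no document matched the query")
      elif result.modified_count == 0:
          print("matched, but the value was already set")
      else:
          print("update succeeded")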

    Read the article

  • Is it not possible to make a C++ application "Crash Proof"?

    - by Enno Shioji
    Let's say we have an SDK in C++ that accepts some binary data (like a picture) and does something with it. Is it not possible to make this SDK "crash-proof"? By crash I primarily mean forceful termination by the OS upon memory access violation, due to invalid input passed by the user (like abnormally short junk data). I have no experience with C++, but when I googled, I found several means that sounded like a solution (use a vector instead of an array, configure the compiler so that automatic bounds checks are performed, etc.). When I presented this to the developer, he said it is still not possible. Not that I don't believe him, but if so, how does a language like Java handle this? I thought the JVM performs a bounds check every time. If so, why can't one do the same thing in C++ manually?

    UPDATE: By "crash proof" I don't mean that the application does not terminate. I mean it should not abruptly terminate without information about what happened (I mean it will dump core etc., but is it not possible to display a message like "Argument x was not valid" etc.?)

    Read the article

  • Converting json data to an HTML table

    - by dnaluz
    I have an array of data in PHP and I need to display this data in an HTML table. Here is what an example data set looks like:

      Array(
          Array (
              [comparisonFeatureId] => 1182
              [comparisonFeatureType] => Category
              [comparisonValues] => Array (
                  [0] => Not Available
                  [1] => Standard
                  [2] => Not Available
                  [3] => Not Available
              )
              [featureDescription] => Rear Seat Heat Ducts
          ),
      )

    The dataset is a comparison of 3 items (shown in the comparisonValues index of the array). In the end I need the row to look similar to this:

      <tr class="alt2 section_1">
          <td><strong>$record['featureDescription']</strong></td>
          <td>$record['comparisonValues'][0]</td>
          <td>$record['comparisonValues'][1]</td>
          <td>$record['comparisonValues'][2]</td>
          <td>$record['comparisonValues'][3]</td>
      </tr>

    The problem I am coming across is how best to do this. Should I create the entire table HTML on the server side, pass it over an ajax call and just dump the pre-rendered HTML into a div, or pass the json data and render the table client side? Any elegant suggestions? Thanks in advance.

    Read the article

  • Should '#include' and 'using' statements be repeated in both header and implementation files (C++)?

    - by Dr. Monkey
    I'm fairly new to C++, but my understanding is that a #include statement will essentially just dump the contents of the #included file into the location of that statement. This means that if I have a number of '#include' and 'using' statements in my header file, my implementation file can just #include the header file, and the compiler won't mind if I don't repeat the other statements. What about people though? My main concern is that if I don't repeat the '#include', 'using', and also 'typedef' (now that I think of it) statements, it takes that information away from the file in which it's used, which could lead to confusion. I am just working on small projects at the moment where it won't really cause any issues, but I can imagine that in larger projects with more people working on them it could become a significant issue. An example follows:

      // Unit.h
      #include <string>
      #include <ostream>
      #include "StringSet.h"

      using std::string;
      using std::ostream;

      class Unit {
      public:
          // public members
      private:
          // private members
          // unrelated side-question: should private members
          // even be included in the header file?
      };

      // Unit.cpp
      #include "Unit.h"

      // The following are all redundant from a compiler perspective:
      #include <string>
      #include <ostream>
      #include "StringSet.h"

      using std::string;
      using std::ostream;

      // implementation goes here

    Read the article

  • How can I pass a hash to a Perl subroutine?

    - by Vishalrix
    In one of my main (or primary) routines, I have two or more hashes. I want the subroutine foo() to receive these possibly-multiple hashes as distinct hashes. Right now I have no preference whether they go by value or as references. I have been struggling with this for the last many hours and would appreciate help, so that I don't have to leave Perl for PHP! (I am using mod_perl, or will be.) Right now I have got some answer to my requirement, shown here, from http://forums.gentoo.org/viewtopic-t-803720-start-0.html:

      # sub: dump the hash values with the keys '1' and '3'
      sub dumpvals {
          foreach $h (@_) {
              print "1: $h->{1} 3: $h->{3}\n";
          }
      }

      # initialize an array of anonymous hash references
      @arr = ({1,2,3,4}, {1,7,3,8});

      # create a new hash and add the reference to the array
      $t{1} = 5;
      $t{3} = 6;
      push @arr, \%t;

      # call the sub
      dumpvals(@arr);

    I only want to extend it so that in dumpvals I could do something like this:

      foreach my %k ( keys @_[0]) {
          # use $k and @_[0], and others
      }

    The syntax is wrong, but I suppose you can tell that I am trying to get the keys of the first hash (hash1 or h1) and iterate over them. How do I do it in the latter code snippet above?

    Read the article

  • How would I make this faster? Parsing Word/sorting by heading [on hold]

    - by Doof12
    Currently it takes about 3 minutes to run through a single 53-page Word document. Hopefully you all have some advice about speeding up the process. Code:

      import win32com.client as win32
      from glob import glob
      import io
      import re
      from collections import namedtuple
      from collections import defaultdict
      import pprint

      raw_files = glob('*.docx')
      word = win32.gencache.EnsureDispatch('Word.Application')
      word.Visible = False
      oFile = io.open("rawsort.txt", "w+", encoding="utf-8")  # text dump
      doccat = list()
      for f in raw_files:
          word.Documents.Open(f)
          doc = word.ActiveDocument  # whichever document is active at the time
          doc.ConvertNumbersToText()
          print doc.Paragraphs.Count
          for x in xrange(1, doc.Paragraphs.Count + 1):  # for loop to print through paragraphs
              oText = doc.Paragraphs(x)
              if not oText.Range.Tables.Count > 0:
                  results = re.match('(?P<number>(([1-3]*[A-D]*[0-9]*)(.[1-3]*[0-9])+))', oText.Range.Text)
                  stylematch = re.match('Heading \d', oText.Style.NameLocal)
                  if results != None and oText.Style != None and stylematch != None:
                      doccat.append((oText.Style.NameLocal,
                                     oText.Range.Text[:len(results.group('number'))],
                                     oText.Range.Text[len(results.group('number')):]))
                      style = oText.Style.NameLocal
                  else:
                      if oText.Range.Font.Bold == True:
                          doccat.append((style, oText))  # append takes one item, so wrap the pair in a tuple
      oFile.write(unicode(doccat))
      oFile.close()

    The for Paragraph loop obviously takes the most amount of time. Is there some way of identifying and appending it without going through every Paragraph?
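    One commonly suggested alternative, offered here only as an assumption rather than a confirmed fix: since the inputs are .docx files, python-docx can read the paragraphs directly without driving Word over COM, which avoids the per-paragraph cross-process calls. Note that python-docx will not replicate Word's ConvertNumbersToText step and handles tables differently, so treat this purely as a sketch of the idea.

      # Sketch: read paragraphs with python-docx instead of Word automation.
      import re
      from glob import glob
      from docx import Document   # pip install python-docx

      pattern = re.compile(r'(?P<number>(([1-3]*[A-D]*[0-9]*)(.[1-3]*[0-9])+))')
      doccat = []

      for path in glob('*.docx'):
          doc = Document(path)
          style = None
          for para in doc.paragraphs:                      # no COM round-trip per paragraph
              text = para.text
              style_name = para.style.name if para.style is not None else ''
              m = pattern.match(text)
              if m and re.match(r'Heading \d', style_name):
                  style = style_name
                  n = len(m.group('number'))
                  doccat.append((style_name, text[:n], text[n:]))
              elif style is not None and any(run.bold for run in para.runs):
                  doccat.append((style, text))

      with open("rawsort.txt", "w", encoding="utf-8") as out:
          out.write(repr(doccat))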

    Read the article

  • Why does this code leak? (simple codesnippet)

    - by Ela782
    Visual Studio shows me several leaks (a few hundred lines), in total more than a few MB. I traced it down to the following "hello world example". The leak disappears if I comment out the H5::DataSet.getSpace() line.

      #include "stdafx.h"
      #include <iostream>
      #include "cpp/H5Cpp.h"

      int main(int argc, char *argv[])
      {
          _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF ); // dump leaks at return

          H5::H5File myfile;
          try {
              myfile = H5::H5File("C:\\Users\\yyy\\myfile.h5", H5F_ACC_RDONLY);
          } catch (H5::Exception& e) {
              std::string msg( std::string( "Could not open HDF5 file.\n" ) + e.getCDetailMsg() );
              throw msg;
          }

          H5::Group myGroup = myfile.openGroup("/so/me/group");
          H5::DataSet myDS = myGroup.openDataSet("./myfloatvec");

          hsize_t dims[1];
          //myDS.getSpace().getSimpleExtentDims(dims, NULL); // <-- here's the leak
          H5::DataSpace dsp = myDS.getSpace(); // The H5::DataSpace seems to leak
          dsp.getSimpleExtentDims(dims, NULL);
          //dsp.close(); // <-- doesn't help either

          std::cout << "Dims: " << dims[0] << std::endl; // <-- Works as expected
          return 0;
      }

    Any help would be appreciated. I've been on this for hours, I hate unclean code...

    Read the article

  • C++ gdb GUI

    - by HappyDude
    Briefly: Does anyone know of a GUI for gdb that brings it on par or close to the feature set you get in the more recent versions of Visual C++?

    In detail: As someone who has spent a lot of time programming in Windows, one of the larger stumbling blocks I've found whenever I have to code C++ in Linux is that debugging anything using command-line gdb takes me several times longer than it does in Visual Studio, and it does not seem to be getting better with practice. Some things are just easier or faster to express graphically. Specifically, I'm looking for a GUI that:

    - Handles all the basics like stepping over & into code, watch variables and breakpoints
    - Understands and can display the contents of complex & nested C++ data types
    - Doesn't get confused by and preferably can intelligently step through templated code and data structures while displaying relevant information such as the parameter types
    - Can handle threaded applications and switch between different threads to step through or view the state of
    - Can handle attaching to an already-started process or reading a core dump, in addition to starting the program up in gdb

    If such a program does not exist, then I'd like to hear about experiences people have had with programs that meet at least some of the bullet points. Does anyone have any recommendations?

    Edit: Listing out the possibilities is great, and I'll take what I can get, but it would be even more helpful if you could include in your responses: (a) Whether or not you've actually used this GUI and if so, what positive/negative feedback you have about it. (b) If you know, which of the above-mentioned features are/aren't supported. Lists are easy to come by, sites like this are great because you can get an idea of people's personal experiences with applications.

    Read the article

  • Why are my bound parameters all identical (using Linq)?

    - by Scott Stafford
    When I run this snippet of code:

      string[] words = new string[] { "foo", "bar" };
      var results = from row in Assets select row;
      foreach (string word in words)
      {
          results = results.Where(row => row.Name.Contains(word));
      }

    I get this SQL:

      -- Region Parameters
      DECLARE @p0 VarChar(5) = '%bar%'
      DECLARE @p1 VarChar(5) = '%bar%'
      -- EndRegion
      SELECT ...
      FROM [Assets] AS [t0]
      WHERE ([t0].[Name] LIKE @p0) AND ([t0].[Name] LIKE @p1)

    Note that @p0 and @p1 are both bar, when I wanted them to be foo and bar. I guess Linq is somehow binding a reference to the variable word rather than a reference to the string currently referenced by word? What is the best way to avoid this problem? (Also, if you have any suggestions for a better title for this question, please put it in the comments.) Note that I tried this with regular Linq also, with the same results (you can paste this right into Linqpad):

      string[] words = new string[] { "f", "a" };
      string[] dictionary = new string[] { "foo", "bar", "jack", "splat" };
      var results = from row in dictionary select row;
      foreach (string word in words)
      {
          results = results.Where(row => row.Contains(word));
      }
      results.Dump();

    Dumps: bar jack splat
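    The same late-binding pitfall exists with closures in other languages; here it is in Python, purely as an illustration of why every predicate ends up seeing the last value of word, along with the usual fix of capturing a copy per iteration.

      # Illustration only (the question is C#/LINQ): closures capture the variable,
      # not its value at the time the lambda was created.
      words = ["f", "a"]
      dictionary = ["foo", "bar", "jack", "splat"]

      # Buggy version: every predicate refers to the same 'word' variable,
      # which holds "a" once the loop finishes.
      predicates_buggy = [lambda row: word in row for word in words]

      # Fixed version: bind the current value as a default argument, one copy per lambda.
      predicates_fixed = [lambda row, w=word: w in row for word in words]

      print([row for row in dictionary if all(p(row) for p in predicates_buggy)])
      # ['bar', 'jack', 'splat']  -- only "a" was ever tested
      print([row for row in dictionary if all(p(row) for p in predicates_fixed)])
      # []  -- nothing contains both "f" and "a"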

    Read the article

  • How to map string keys to unique integer IDs?

    - by Marek
    I have some data that comes regularly as a dump from a data source with a string natural key that is long (up to 60 characters) and not relevant to the end user. I am using this key in a URL. This makes URLs too long and user unfriendly. I would like to transform the string keys into integers with the following requirements:

    - The source dataset will change over time.
    - The ID should be:
      - a non-negative integer
      - unique and constant even if the set of input keys changes
      - preferably reversible back to the key (not a strong requirement)

    The database is rebuilt from scratch every time, so I cannot remember the already assigned IDs, match the new data set to existing IDs, and generate sequential IDs for the added keys. There are currently around 30,000 distinct keys and the set is constantly growing. How do I implement a function that will map string keys to integer IDs? What I have thought about:

    1. Built-in string.GetHashCode: ID(key) = Math.Abs(key.GetHashCode()) - is not guaranteed to be unique (not reversible)
    1.1 "Re-hashing" the built-in GetHashCode until a unique ID is generated to prevent collisions - existing IDs may change if something colliding is added to the beginning of the input data set
    2. A perfect hashing function - I am not sure if this can generate constant IDs if the set of inputs changes (not reversible)
    3. Translate to base 36/64/?? - does not shorten the long keys enough

    What are the other options?
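    One option in the same spirit as the GetHashCode idea, sketched in Python just to make the trade-off concrete: derive the ID from a cryptographic hash of the key, so it is stable no matter how the key set changes, and accept that reversibility then has to come from a lookup table rebuilt at load time. The 48-bit truncation is an arbitrary choice of mine; for around 30,000 keys the collision probability is tiny, but it is not zero.

      # Sketch: deterministic, non-negative IDs from a truncated SHA-1 of the key.
      import hashlib

      ID_BITS = 48   # arbitrary; keeps URLs short and collisions unlikely for ~30k keys

      def key_to_id(key: str) -> int:
          digest = hashlib.sha1(key.encode("utf-8")).digest()        # 160 bits
          return int.from_bytes(digest, "big") >> (160 - ID_BITS)    # top ID_BITS bits

      def build_reverse_map(keys):
          # reversibility via a dictionary rebuilt whenever the data is loaded
          reverse = {}
          for k in keys:
              i = key_to_id(k)
              if i in reverse and reverse[i] != k:
                  raise ValueError(f"collision between {reverse[i]!r} and {k!r}")
              reverse[i] = k
          return reverse

      keys = ["some-very-long-natural-key-0001", "another-long-natural-key-0002"]
      ids = {k: key_to_id(k) for k in keys}
      reverse = build_reverse_map(keys)
      print(ids)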

    Read the article

  • Importing a Mercurial repository automatically (e.g. SVN Externals)

    - by dawmail333
    I have a project that I am developing built off CodeIgniter. The main part of the project is a private system I am creating, but I want to add it to source control to gain all the associated goodies. Now I'm using Mercurial, so I did the whole hg init bit, so I've got the repository set up. Now, one of the things I've done is to make a library for CodeIgniter, which I use in this project. Now I want to make this library open, so I need a separate repo for that. For anyone unfamiliar with CodeIgniter library development, here's a reference:

      application/
          config/      <- configuration files
          libraries/   <- library logic in here

    Now I will probably develop a few more libraries in the course of this project, so I can't just dump a repo in the application folder without clumping them all together. What I did was this:

      dev/ci/library   <- library here
      dev/project      <- project here

    Now in both of those folders, I have made a repository. What I want to do is make the project repository automatically reference the library repository, so I can have a private and a public repository, as I explained earlier. The main way to do this, I have read, is to use subrepositories, but I can only find examples on nested ones (which are unclear anyway, I find). How do I make it reference another repository like svn:externals?

    Read the article

  • How to make your more experienced and authoritative teammates not to create 'fast temporary solution

    - by Roman
    I'm currently working on a small, short-lived project. But despite the size it's complicated enough, with very unclear logic. That's why it was started by more experienced developers. They work on it from time to time because it's not their main project. They made some code drafts with numerous places which 'would be rewritten in the near future'. After that they added several other 'temporary pieces'. And then again... So, now the project is a mess of 'half-working' pieces of code with some hardcoded values, like file names or some constants which 'will be replaced later with working parts'. The API is awful (nobody thinks about it actually). And it's really, really hard to do development now (for me it's the main and only project). I caught myself thinking that I spend about an hour every day just to understand again all those tricky 'temporary' things and API weaknesses. And after that hour my brain melts. I can't just say "guys, your code smells like a trash dump". What's the correct way?

    Read the article

  • Filter design for audio signal.

    - by beanyblue
    What I am trying to do is simple. I have a few .wav files. I want to remove noise and filter out specific frequencies. I don't have MATLAB and I intend to write my own code for all the filters. Right now, I have a way to read the .wav file and dump out the structure into a text file. My questions are the following:

    1. Can I directly apply the digital filters on this sampled data? (i.e., can I directly do a convolution between my input samples and h(n) for the filter function that I choose?)
    2. How do I choose the number of coefficients for the window function?

    I have Octave, so if someone can point me to anything that gives me some idea on how to process the .wav file using Octave, that would be great too. I want to be able to filter out the frequency and then listen to the sound again. Is this possible with Octave? I'm just a beginner with these kinds of things, so please bear with me if my questions are too naive. Any help will be great.
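    On the first question: yes, applying an FIR filter is exactly a convolution of the samples with the filter's impulse response h(n). As an illustration of the whole read-filter-write round trip, here is a sketch in Python with SciPy rather than Octave, so treat it purely as an outline of the idea; the cutoff, number of taps and file names are arbitrary choices.

      # Sketch of read -> FIR filter (convolution) -> write.
      import numpy as np
      from scipy.io import wavfile
      from scipy.signal import firwin

      rate, samples = wavfile.read("input.wav")          # e.g. 16-bit PCM -> int16 array
      samples = samples.astype(np.float64)
      if samples.ndim > 1:                               # keep it simple: mono only
          samples = samples[:, 0]

      # 101-tap low-pass FIR at 3 kHz; more taps = sharper transition, more delay.
      taps = firwin(numtaps=101, cutoff=3000.0, fs=rate)

      filtered = np.convolve(samples, taps, mode="same") # this is the convolution step

      wavfile.write("filtered.wav", rate, filtered.astype(np.int16))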

    Read the article

  • Import Excel into Rails app

    - by Jack
    Hi, I am creating a small Rails app for personal use and would like to be able to upload Excel files to later be validated and added to the database. I had this working previously with CSV files, but this has since become impractical. Does anyone know of a tutorial for using the roo or spreadsheet gem to upload the file, display the contents to the user and then add them to the database (after validating)? I know this is quite specific, but I want to work through this step by step. All I have so far is an 'import' view:

      <% form_for :dump, :url => { :controller => "students", :action => "student_import" },
                  :html => { :multipart => true } do |f| -%>
        Select an Excel File : <%= f.file_field :excel_file -%>
        <%= submit_tag 'Submit' -%>
      <% end -%>

    But I have no idea how to access this uploaded file in the controller. Any suggestions/help would be welcomed. Thanks

    Read the article

  • Is this an error in "More Effective C++" in Item28?

    - by particle128
    I encountered a question when I was reading Item 28 in More Effective C++. In this item, the author shows us that we can use a member template in SmartPtr such that a SmartPtr<Cassette> can be converted to a SmartPtr<MusicProduct>. The following code is not the same as in the book, but has the same effect:

      #include <iostream>

      class Base {};
      class Derived : public Base {};

      template<typename T>
      class smart {
      public:
          smart(T* ptr) : ptr(ptr) {}

          template<typename U>
          operator smart<U>()
          {
              return smart<U>(ptr);
          }

          ~smart() { delete ptr; }

      private:
          T* ptr;
      };

      void test(const smart<Base>&) {}

      int main()
      {
          smart<Derived> sd(new Derived);
          test(sd);
          return 0;
      }

    It indeed can be compiled without compilation errors. But when I ran the executable file, I got a core dump. I think that's because the member function of the conversion operator makes a temporary smart<Base>, which holds a pointer to the same ptr in sd (its type is smart<Derived>). So the delete runs twice. What's more, after calling test, we can never use sd any more, since ptr in sd has already been deleted. Now my questions are: Is my thinking right? Or is my code not the same as the original code in the book? If my thinking is right, is there any method to do this? Thanks very much for your help.

    Read the article

  • Call to a member function ... on a non-object

    - by jayceekay
    I have an object which is instantiated in an initialize file, which is called with every request. The name is right, so why is it telling me that oourls isn't an object and that redirectLoggedIn isn't its method? A var_dump on oourls says NULL. But it's instantiated, and the backtrace at the bottom shows that it goes through initialization and instantiates it. Pretty small snippet of code, here's the relevant bit:

      if ($email) {
          global $session;
          $session->grantLogin($email);
          global $oourls;
          $oourls->redirectLoggedIn();
      } else {
          return false;
      }

    And here's the output of debug_print_backtrace I threw in above the oourls method call, because I'm completely confused:

      #0 accounts::verifyEmailRegisterAccount(37a6274c8f4bfa5c537b40e8e04d634a) called at [\public\includes\default\verifyemail.php:16]
      #1 require_once(\public\includes\default\verifyemail.php) called at [\support\php\ObjectOrientedURLs.class.php:48]
      #2 ObjectOrientedURLs->mhqqrVerifyemail(Array ([0] => 37a6274c8f4bfa5c537b40e8e04d634a))
      #3 ReflectionMethod->invoke(ObjectOrientedURLs Object (), Array ([0] => 37a6274c8f4bfa5c537b40e8e04d634a)) called at [\support\php\ObjectOrientedURLs.class.php:280]
      #4 ObjectOrientedURLs->parseAndInvokeURL() called at [\support\php\ObjectOrientedURLs.class.php:255]
      #5 ObjectOrientedURLs->__construct() called at [\support\php\initialize.php:76]
      #6 require_once(\support\php\initialize.php) called at [\public\index.php:2]

    Read the article

  • CakePHP: Missing database table

    - by Justin
    I have a CakePHP application that is running fine locally. I uploaded it to a production server and the first page that uses a database connection gives the "Missing Database Table" error. When I look at the controller dump, it's complaining about the first table. I've tried a variety of things to fix this problem, with no luck:

    - I've confirmed that at the command line I can log in with the given MySQL credentials in database.php
    - I've confirmed this table exists
    - I've tried using the MySQL root credentials (temporarily) to see if the problem lies with permissions of the user. The same error appeared.
    - My debug level is currently set to 3
    - I've deleted the entire contents of /app/tmp/cache
    - I've set 777 permissions on /app/tmp*
    - I've confirmed that I can run DESCRIBE commands at the command-line MySQL client when logged in with the MySQL credentials used by the application
    - I've verified that the CakePHP log file only contains the error I'm seeing in the browser window
    - I've tried all the suggestions I could find in similar postings on SO
    - I've Googled around and didn't find any other ideas

    I think I've eliminated the obvious problems and my research isn't turning anything up. I feel like I'm missing something obvious. Any ideas?

    Read the article

  • Frequently getting booted from Securemote VPN-1 Connection

    - by Nick L.
    I connect to my office's network remotely through the Checkpoint SecuRemote E75 (R75) VPN application, but recently it's been causing me a lot of issues when connecting from home. I connect through a WRT54GL router running DD-WRT v24 firmware, so I have no clue if that affects anything. I took a dump of the logs for Checkpoint and here are the messages that populate when I get booted but I have no clue how to decipher them and my IT department is completely clueless in terms of resolving the situation. I'm thinking the router is blocking the keep alive connection or something along those lines, but I have no idea how to fix the problem. [ 2388 2932][30 Aug 22:47:49][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2932][30 Aug 22:47:50][TR_EVENTS] TR_EVENTS::Raise: Running registered cb... [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInfSendAsynchronic: __start__ 22:47:50.606 [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInf::TrComInfSendAsynchronic: Acquiring mutex [ 2388 2932][30 Aug 22:47:50][messaging] messaging::send_all: Sending Message {{ 2 }} , len 185 [ 2388 2932][30 Aug 22:47:50][tcpserver] TcpMultiPipe::pipe_if_send: Message (193 bytes) written successfully to socket 0x224 [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInf::TrComInfSendAsynchronic: Released mutex [ 2388 2932][30 Aug 22:47:50][TrComInf] TrComInf::TrComInfSendAsynchronic: __end__ 22:47:50.606. Total time - 0 milliseconds [ 2388 2932][30 Aug 22:47:50][TR_SRV2CL] TR_SRV2CL::SendNotification: Successfully sent notification of type TR_NOTIFICATION_TRAFFIC_IDLE [ 2388 2932][30 Aug 22:47:50][vna] vna_trap: received VNA_TRAP_FORWARD_PACKET [ 2388 2932][30 Aug 22:47:50][vna] vna_traffic_fwd_do : forwarding packet with 98 bytes [ 2388 2932][30 Aug 22:47:50][TR_OFFICE_MODE] TrOfficeMode::OmSendIpFrameCB: Packet to destination 192.168.162.15 of protocol 17 [ 2388 2932][30 Aug 22:47:50][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2932][30 Aug 22:47:51][vna] vna_trap: received VNA_TRAP_FORWARD_PACKET [ 2388 2932][30 Aug 22:47:51][vna] vna_traffic_fwd_do : forwarding packet with 98 bytes [ 2388 2932][30 Aug 22:47:51][TR_OFFICE_MODE] TrOfficeMode::OmSendIpFrameCB: Packet to destination 192.168.162.15 of protocol 17 [ 2388 2932][30 Aug 22:47:51][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2392][30 Aug 22:47:52][TracService] service_ctrl_ex: Called with ctrl_code 14 [ 2388 2392][30 Aug 22:47:52][TracService] service_ctrl_ex: System got SERVICE_CONTROL_SESSIONCHANGE message event type 4 session 2 [ 2388 2392][30 Aug 22:47:52][TracService] service_ctrl_ex: Console/remote disconnect has occured in session 2 [ 2388 2932][30 Aug 22:47:52][vna] vna_trap: received VNA_TRAP_FORWARD_PACKET [ 2388 2932][30 Aug 22:47:52][vna] vna_traffic_fwd_do : forwarding packet with 98 bytes [ 2388 2932][30 Aug 22:47:52][TR_OFFICE_MODE] TrOfficeMode::OmSendIpFrameCB: Packet to destination 192.168.162.15 of protocol 17 [ 2388 2932][30 Aug 22:47:52][TR_OFFICE_MODE] TR_OFFICE_MODE::TrOfficeMode::OmSendIpFrameCB: Not sending packet because it's not to the enc domain [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEnum: Returning connection at position 1 [ 2388 2932][30 Aug 22:47:52][TR_EVENTS] TR_EVENTS::Raise: Running registered cb... 
[ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventMainHandler: no gw handle [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventMainHandler: Current connection state is TR_CONN_STATE_CONNECTED. Receiving event of type CONN_EVENT_SYSTEM_SESSION_LOGOFF. Connection handle = 1. System state: TR_SYSTEM_STATE_RUNNING [ 2388 2932][30 Aug 22:47:52][CONFIG_MANAGER] suspend_tunnel_while_locked return value false, because it is Default variable. Scope: site 12.43.159.10, gw NULL ,user USER [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventConnectedHandler: no gw handle [ 2388 2932][30 Aug 22:47:52][TR_CONN_MANAGER] TR_CONN_MANAGER::ConnEventConnectedHandler: receive session logoff event while connected. cancelling connection Thanks all. :)

    Read the article

  • Oracle Application Server Performance Monitoring and Tuning (CPU load high)

    - by Berkay
    I have just been hired by a company and my boss gave me a performance issue to solve as soon as possible. I don't have any prior experience with Java EE on the server side. Let me describe what I learned about the system, and why I still couldn't find the solution: We have an Oracle Application Server (10.1) and an Oracle Database server (9.2). The software guys wrote a fairly big J2EE project (project X) using JSF 1.2 with Ajax, which is only used in this project. They actively use PL/SQL in their code. So, we start the application server (a Solaris machine) and everything seems OK. Users start using the app on Monday from different locations (about 200 users have accounts; I just checked and the connection pool is set up correctly, and sessions are active for only 15 minutes). After some time (2 days) CPU utilization gets high, around 60%; at night it stays the same, nothing changes (there are only 1 or 2 online users at that time), and it even starts using the CPU allocated for other applications on the same server once they free it. If we don't restart the server, utilization reaches 90% over the following 2 days and the application is so slow that end users start calling. The main problem is that the software engineers say the code is clean, and the system and DBA managers say we have the correct configuration; the other applications seem OK, so why does this problem happen only for application X? I started copying the DB to a test platform and upgraded it to the latest version; I did the same with the application server (WebLogic) to see whether there is a bug or not. I tested by myself with only one user, and in the WebLogic admin panel I can track the threads and dump them. I noticed that some threads are showing up as hogging. When I check the manuals and follow the trace, it directs me to the line number where PL/SQL code is called from a .java file. The software engineers say "yes, we have really complex PL/SQL code, but what's the relation with the application server? This is a DB server problem" - I guess they're right... I know the question has many holes; I'd like to give more detail, and I appreciate any way you can guide me. Thanks in advance... Edit: The server has enough CPU and memory to run more complex applications.

    Read the article
