Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.

Page 473/1016

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

        1. Invoice arrives, in the form of an email attachment (Excel spreadsheet).
        2. Monkey opens invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550.
        3. Monkey sends this new spreadsheet to boss (email if lucky, printer otherwise), who sets the retail price.
        4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing.
        5. Monkey fires up HyperTerminal, types in "AT", disconnects.
        6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time.

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist:

        - Monkey asks Thunderbird (mail server perhaps?) for the attachment.
        - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something.
        - Monkey parses the output, does the complex calculations. // TODO: find a way to get the boss-generated prices with minimal manual labor involved
        - Monkey connects to the database, inserts data.
        - Monkey spams customers.

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
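
    For the parse-and-calculate step of that wishlist, a minimal sketch in PHP, assuming a hypothetical three-column CSV layout (sku, description, cost) and the =B2*550 style markup from the template:

        <?php
        // Hypothetical file name and column layout; adjust to the real invoice.
        $markup = 550;
        $rows = array();
        if (($fh = fopen('invoice.csv', 'r')) !== false) {
            while (($cols = fgetcsv($fh)) !== false) {
                list($sku, $desc, $cost) = $cols;
                // The "extremely complex calculation" from the spreadsheet template.
                $rows[] = array($sku, $desc, (float)$cost * $markup);
            }
            fclose($fh);
        }
        // $rows is now ready to print for the boss, or to insert into the database.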

    Read the article

  • algorithm - How to sort a 0/1 array with 2n/3 comparisons?

    - by Jackson Tale
    In the Algorithm Design Manual, there is exercise 4-26:

        Consider the problem of sorting a sequence of n 0's and 1's using comparisons. For each comparison of two values x and y, the algorithm learns which of x < y, x = y, or x > y holds.

        (a) Give an algorithm to sort in n - 1 comparisons in the worst case. Show that your algorithm is optimal.

        (b) Give an algorithm to sort in 2n/3 comparisons in the average case (assuming each of the n inputs is 0 or 1 with equal probability). Show that your algorithm is optimal.

    For (a), I think it is fairly easy. I can choose a[n-1] as pivot, then do something like the partition step in quicksort: scan 0 to n - 2 and find the middle point where the left side is all 0 and the right side is all 1. This takes n - 1 comparisons. But for (b), I can't get a clue. It says "each of the n inputs is 0 or 1 with equal probability", so I guess I can assume the numbers of 0 and 1 are equal? But how can I get a result that is related to 1/3? Divide the whole array into 3 groups? Thanks
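
    A sketch of the (a) approach in PHP, counting the three-way comparisons explicitly (helper name hypothetical, not from the book):

        <?php
        // Sort a 0/1 array by three-way-comparing every element to the pivot a[n-1].
        // Exactly n - 1 comparisons are made; the counts alone fix the sorted output.
        function sort01(array $a) {
            $n = count($a);
            $pivot = $a[$n - 1];
            $less = $greater = 0;
            for ($i = 0; $i < $n - 1; $i++) {   // one three-way comparison per element
                if ($a[$i] < $pivot)     $less++;
                elseif ($a[$i] > $pivot) $greater++;
            }
            if ($less > 0) {                    // pivot must be 1, so $less zeros
                $zeros = $less;
            } elseif ($greater > 0) {           // pivot must be 0, so n - $greater zeros
                $zeros = $n - $greater;
            } else {
                return $a;                      // all values equal: already sorted
            }
            return array_merge(array_fill(0, $zeros, 0), array_fill(0, $n - $zeros, 1));
        }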

    Read the article

  • libcurl (c api) READFUNCTION for http PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library. I am having two problems with a PUT message; I am just trying to send a small content like "hello" via PUT.

        1. My READFUNCTION for PUTs blocks for a very large amount of time (minutes) when I follow the manual at curl.haxx.se and return 0 indicating I have finished the content (on OS X). When I return something > 0 this succeeds much faster (< 1 sec).

        2. When I run this on my Linux machine (Ubuntu 10.04) this blocking appears to NEVER return when I return 0; if I change the behavior to return the size written, libcurl appends all the data in the HTTP body, sending way more data, and it fails with a "too much data" message from the server.

    My read function is below; any help would be greatly appreciated. I am using libcurl 7.20.1.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            if (stream) {
                postdata *ud = (postdata *)stream;
                if (ud->bytes_remaining) {
                    if (ud->body_size > size * nmemb) {
                        memcpy(ptr, (char *)ud->data + ud->bytes_written, size * nmemb);
                        ud->bytes_written += size * nmemb;
                        ud->bytes_remaining = ud->body_size - size * nmemb;
                        return size * nmemb;
                    } else {
                        memcpy(ptr, (char *)ud->data + ud->bytes_written, ud->bytes_remaining);
                        ud->bytes_remaining = 0;
                        return 0;
                    }
                }
            }
            return 0;
        }

    Read the article

  • MySQL nested CASE error I need help with?

    - by AK
    What I am trying to do here is: IF the records in table todo as identified in $done have a value in the column recurinterval, THEN reset the date_scheduled column, ELSE just set the status_id column to 6 for those records. This is the error I get from mysql_error():

        You have an error in your SQL syntax; check the manual that corresponds to your
        MySQL server version for the right syntax to use near 'CASE recurinterval != 0
        AND recurinterval IS NOT NULL THEN SET date_sche' at line 2

    How can I make this statement work?

        UPDATE todo
        CASE recurinterval != 0 AND recurinterval IS NOT NULL THEN
            SET date_scheduled = CASE recurunit
                WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
                WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
                WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
                WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
            END
            WHERE todo_id IN ($done)
        ELSE
            SET status_id = 6 WHERE todo_id IN ($done)
        END

    The following MySQL statement worked just fine before I revised it like above:

        UPDATE todo
        SET date_scheduled = CASE recurunit
            WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
            WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
            WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
            WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
        END
        WHERE todo_id IN ($done)
        AND recurinterval != 0 AND recurinterval IS NOT NULL
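
    For reference, CASE in MySQL is an expression, not a statement, so it cannot wrap a SET clause; that is what the parser is complaining about. One way to express both branches in a single valid UPDATE, sketched here as the PHP query string (untested against the asker's schema):

        $sql = "UPDATE todo
                SET date_scheduled = CASE
                        WHEN recurinterval IS NOT NULL AND recurinterval != 0 THEN
                            CASE recurunit
                                WHEN 'DAY'   THEN DATE_ADD(date_scheduled, INTERVAL recurinterval DAY)
                                WHEN 'WEEK'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval WEEK)
                                WHEN 'MONTH' THEN DATE_ADD(date_scheduled, INTERVAL recurinterval MONTH)
                                WHEN 'YEAR'  THEN DATE_ADD(date_scheduled, INTERVAL recurinterval YEAR)
                            END
                        ELSE date_scheduled   -- leave unchanged
                    END,
                    status_id = CASE
                        WHEN recurinterval IS NULL OR recurinterval = 0 THEN 6
                        ELSE status_id        -- leave unchanged
                    END
                WHERE todo_id IN ($done)";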

    Read the article

  • how does MySQL implement the "group by"?

    - by user188916
    I read from the MySQL Reference Manual and found that when it can make use of an index, GROUP BY just does an index scan; otherwise it will create tmp tables and do things like filesort. And I also read in another article that the GROUP BY result is sorted by the grouped columns by default; if an ORDER BY NULL clause is added, it won't do the filesort. The difference can be seen from the EXPLAIN output. So my problem is: what is the difference between a GROUP BY clause with ORDER BY NULL and one without? I tried to use profiling to see what MySQL does in the background, and see results like the following.

    Result for the GROUP BY clause without ORDER BY NULL (note the extra Sorting result step):

        | preparing            | 0.000016 |
        | Creating tmp table   | 0.000048 |
        | executing            | 0.000009 |
        | Copying to tmp table | 0.000109 |
        | Sorting result       | 0.000023 |
        | Sending data         | 0.000027 |

    Result for the clause with ORDER BY NULL:

        | preparing            | 0.000016 |
        | Creating tmp table   | 0.000052 |
        | executing            | 0.000009 |
        | Copying to tmp table | 0.000114 |
        | Sending data         | 0.000028 |

    So I guess that when ORDER BY NULL is added, it does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, and then uses the index to do the group by operation; when completed, it just reads the result from the table rows and does not sort the result. But my original opinion was that MySQL could quicksort the items and then do the group by, so the result would be sorted as well. Any opinion appreciated, thanks.

    Read the article

  • Using capistrano to deploy from different git branches

    - by Toms Mikoss
    I am using Capistrano to deploy a RoR application. The codebase is in a git repository, and branching is widely used in development. Capistrano uses the deploy.rb file for its settings, one of them being the branch to deploy from. My problem is this: let's say I create a new branch A from master. The deploy file will reference the master branch. I edit that, so A can be deployed to the test environment. I finish working on the feature and merge branch A into master. Since the deploy.rb file from A is fresher, it gets merged in, and now the deploy.rb in the master branch references A. Time to edit again. That's a lot of seemingly unnecessary manual editing - the parameter should always match the current branch name. On top of that, it is easy to forget to edit the settings each and every time. What would be the best way to automate this process? Edit: Turns out someone had already done exactly what I needed.

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello, in my team of real-time embedded C/C++ developers, most people don't have any culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous automatic tests, but when I try to convince others I get some recurring arguments like:

        - We will spend more time on writing the tests than writing the code.
        - It takes a lot of effort to maintain the tests.
        - Our code is spaghetti; no way we can unit-test it.
        - Our requirements are not sealed – we'll have to rewrite all the tests every time the requirements are changed.

    Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We in IBM/Microsoft/Google, surveying 3475 active projects, found out that putting 50% more development time into testing decreased by 75% the time spent on fixing bugs" or "after half a year, the time needed to write code with tests was only marginally longer than what it used to take without tests". Any ideas?

    P.S.: I'm adding the C++ tag too in case someone has specific experience with convincing this, usually elitist, type of developers :-)

    Read the article

  • Unserializing an API return object (PHP/Ebay API)

    - by DavidYell
    I have been working with the eBay API for a project and have found it great. I have however found a problem now, more PHP related. When I read my items from eBay, I store a bunch of details in the database. Currently, just for the sake of it really, I serialize the whole return object and store it in the database in a related table. The idea being that when I display my information, I have all the details to hand should I need them. The problem arises in that the pricing information is always in a sub-object:

        [ConvertedAdjustmentAmount] => __PHP_Incomplete_Class Object
            (
                [__PHP_Incomplete_Class_Name] => eBayAmountType
                [_] => 0
                [currencyID] => USD
            )

    As you can see, when I unserialize my object, my cunning plan falls foul of the incomplete class problem. I have checked the following question, without success: http://stackoverflow.com/questions/965611/forcing-access-to-php-incomplete-class-object-properties The main issue lies, as far as I can see, in that the price class is stored in the eBay API, so how do I recreate it? I have been reading this page, http://uk3.php.net/manual/en/function.unserialize.php and trying to figure out unserialize_callback_func, which I can't figure out either, so any help would be appreciated!
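
    The unserialize_callback_func ini setting names a function that PHP calls when unserialize() meets an undefined class, which gives you a chance to define or load it before the properties are restored. A minimal sketch (the eval'd empty class is a crude stand-in; loading the eBay SDK's real class files in the callback would be better):

        <?php
        ini_set('unserialize_callback_func', 'define_missing_class');

        function define_missing_class($classname) {
            // Hypothetical fallback: declare an empty class of that name so
            // unserialize() can restore the stored properties onto it.
            eval("class $classname {}");
        }

        $obj = unserialize($storedBlob);   // $storedBlob: the serialized column value
        // Assuming the dump shown above, the price is now reachable again:
        echo $obj->ConvertedAdjustmentAmount->currencyID;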

    Read the article

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it. The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a denormalised table ready for reporting. What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump into a Well Known Location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual. Is there one, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", and not worry about mapping column names and all the other arcane sqlldr voodoo. Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column.

    Read the article

  • C++ MACRO that will execute a block of code and a certain command after that block.

    - by Poni
        void main()
        {
            int xyz = 123; // original value
            { // code block starts
                xyz++;
                if (xyz < 1000)
                    xyz = 1;
            } // code block ends
            int original_value = xyz; // should be 123
        }

        void main()
        {
            int xyz = 123; // original value
            MACRO_NAME(xyz = 123) // the macro takes the code that should be executed at the end of the block
            { // code block starts
                xyz++;
                if (xyz < 1000)
                    xyz = 1;
            } // code block ends  << how to make the macro execute the "xyz = 123" statement?
            int original_value = xyz; // should be 123
        }

    Only the first main() works. I think the comments explain the issue. It doesn't need to be a macro, but to me it just sounds like a classical "macro-needed" case. By the way, there's the BOOST_FOREACH macro/library and I think it does the exact same thing I'm trying to achieve, but it's too complex for me to find the essence of what I need. From its introductory manual page, an example:

        #include <string>
        #include <iostream>
        #include <boost/foreach.hpp>

        int main()
        {
            std::string hello("Hello, world!");
            BOOST_FOREACH(char ch, hello)
            {
                std::cout << ch;
            }
            return 0;
        }

    Read the article

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like

        #define FLAG1 0x0001
        #define FLAG2 0x0002
        #define FLAG3 0x0004

        class MyClass {
            // ...
            unsigned Flags;
            int  IsFlag1Set() { return Flags & FLAG1; }
            void SetFlag1Set() { Flags |= FLAG1; }
            void ResetFlag1() { Flags &= 0xffffffff ^ FLAG1; }
            // ...
        };

    For obvious reasons I'd like to change this to use bit fields, something like

        class MyClass {
            // ...
            struct Flags {
                unsigned Flag1 : 1;
                unsigned Flag2 : 1;
                unsigned Flag3 : 1;
            };
            // ...
        };

    The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32-bit and 64-bit Windows? The application deals with huge amounts of data in memory and must be both fast and memory efficient, which could well be why it was written this way in the first place.

    Read the article

  • pyOpenSSL and the WantReadError

    - by directedition
    I have a socket server that I am trying to move over to SSL on Python 2.5, but I've run into a snag with pyOpenSSL. I can't find any good tutorials on using it, so I'm operating largely on guesses. Here is how my server sets up the socket:

        ctx = SSL.Context(SSL.SSLv23_METHOD)
        ctx.use_privatekey_file("mykey.pem")
        ctx.use_certificate_file("mycert.pem")
        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        addr = ('', int(8081))
        sock.bind(addr)
        sock.listen(5)

    Here is how it accepts clients:

        sock.setblocking(0)
        while True:
            if len(select([sock], [], [], 0.25)[0]):
                client_sock, client_addr = sock.accept()
                client = ClientGen(client_sock)

    And here is how it sends/receives from the connected sockets:

        while True:
            (r, w, e) = select.select([sock], [sock], [], 0.25)
            if len(r):
                bytes = sock.recv(1024)
            if len(w):
                n_bytes = sock.send(self.message)

    It's compacted, but you get the general idea. The problem is, once the send/receive loop starts, it dies right away, before anything has been sent or received (that I can see anyway):

        Traceback (most recent call last):
          File "ClientGen.py", line 50, in networkLoop
            n_bytes = sock.send(self.message)
        WantReadError

    The manual's description of the WantReadError is very vague, saying it can come from just about anywhere. What am I doing wrong?

    Read the article

  • new Stateful session bean instance without calling lookup

    - by kislo_metal
    Scenario: I have a @Singleton UserFactory (it could also be @Stateless), whose method creatSession() generates a @Stateful UserSession bean by manual lookup. If I inject it by DI with @EJB, I will get the same instance during calls to the fromFactory() method (as it should be). What I want is to get a new instance of UserSession without performing a lookup.

    Q1: How could I get a new instance of a @Stateful session bean?

    Code:

        @Singleton
        @Startup
        @LocalBean
        public class UserFactory {

            @EJB
            private UserSession session;

            public UserFactory() {
            }

            @Schedule(second = "*/1", minute = "*", hour = "*")
            public void creatingInstances() {
                try {
                    InitialContext ctx = new InitialContext();
                    UserSession session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                    System.out.println("in singleton UUID " + session2.getSessionUUID());
                } catch (NamingException e) {
                    e.printStackTrace();
                }
            }

            @Schedule(second = "*/1", minute = "*", hour = "*")
            public void fromFactory() {
                System.out.println("in singleton UUID " + session.getSessionUUID());
            }

            public UserSession creatSession() {
                UserSession session2 = null;
                try {
                    InitialContext ctx = new InitialContext();
                    session2 = (UserSession) ctx.lookup("java:global/inferno/lic/UserSession");
                    System.out.println("in singleton UUID " + session2.getSessionUUID());
                } catch (NamingException e) {
                    e.printStackTrace();
                }
                return session2;
            }
        }

    As I understand, calling session.getClass().newInstance() is not the best idea. Q2: is that true? I am using GlassFish v3, EJB 3.1.

    Read the article

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature:

        - Create an empty database
        - Tasks -> Import Data...
        - .NET Framework Data Provider for Odbc
        - Valid DSN (verified it connects)
        - Copy data from one or more tables or views
        - Check 1 VERY simple table
        - Click Preview

    I then get the error:

        The preview data could not be retrieved.
        ADDITIONAL INFORMATION:
        ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error
        in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)

    A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the above error. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).

    Read the article

  • C/C++ macro/template blackmagic to generate unique name.

    - by anon
    Macros are fine. Templates are fine. Pretty much whatever works is fine. The example is OpenGL, but the technique is C++ specific and relies on no knowledge of OpenGL.

    Precise problem: I want an expression E where I do not have to specify a unique name, such that a constructor is called where E is defined, and a destructor is called where the block E is in ends. For example, consider:

        class GlTranslate {
            GlTranslate(float x, float y, float z) {
                glPushMatrix();
                glTranslatef(x, y, z);
            }
            ~GlTranslate() {
                glPopMatrix();
            }
        };

    Manual solution:

        {
            GlTranslate foo(1.0, 0.0, 0.0); // I had to give it a name
            // .....
        } // auto popmatrix

    Now, I have this not only for glTranslate, but lots of other PushAttrib/PopAttrib calls too. I would prefer not to have to come up with a unique name for each var. Is there some trick involving macros, templates ... or something else that will automatically create a variable whose constructor is called at point of definition, and destructor called at end of block? Thanks!

    Read the article

  • How to add an array value to the middle of an associative array?

    - by Citizen
    Let's say I have this array:

        $array = array('a' => 1, 'z' => 2, 'd' => 4);

    Later in the script, I want to add the value 'c' => 3 before 'z'. How can I do this?

    EDIT: Yes, the order is important. When I run a foreach() through the array, I do NOT want this newly added value added to the end of the array. I am getting this array from mysql_fetch_assoc().

    EDIT 2: The keys I used above are placeholders. Using ksort() will not achieve what I want.

    EDIT 3: http://www.php.net/manual/en/function.array-splice.php#88896 accomplishes what I'm looking for, but I'm looking for something simpler.

    EDIT 4: Thanks for the downvotes. I gave feedback to your answers and you couldn't help, so you downvoted and requested to close the question because you didn't know the answer. Thanks.

    EDIT 5: Take a sample db table with about 30 columns. I get this data using mysql_fetch_assoc(). In this new array, after columns 'pizza' and 'drink', I want to add a new column 'full_dinner' that combines the values of 'pizza' and 'drink', so that when I run a foreach() on the said array, 'full_dinner' comes directly after 'drink'.
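
    A small helper along the lines of the array-splice comment linked in EDIT 3, sketched here (function name hypothetical): array_slice() with its preserve-keys flag keeps the associative keys, and the + operator stitches the pieces back together:

        <?php
        function assoc_insert_before(array $arr, $before_key, $new_key, $new_value) {
            $pos = array_search($before_key, array_keys($arr), true);
            if ($pos === false) {                // key not found: append at the end
                $arr[$new_key] = $new_value;
                return $arr;
            }
            return array_slice($arr, 0, $pos, true)
                 + array($new_key => $new_value)
                 + array_slice($arr, $pos, null, true);
        }

        $array = array('a' => 1, 'z' => 2, 'd' => 4);
        $array = assoc_insert_before($array, 'z', 'c', 3);
        // array('a' => 1, 'c' => 3, 'z' => 2, 'd' => 4)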

    Read the article

  • folder structure in a mercurial repo?

    - by ajsie
    I have just switched from svn to Mercurial and have read some tutorials about it. I've still got some confusions that I hope you could help me to sort out. I wonder if I have understood the folder structure in a Mercurial repo right. In an svn repo I usually have these folders:

        branches (branches/chat, branches/new_login etc)
        tags     (version1.0, version2.0 etc)
        sandbox
        trunk

    Should a branch actually be another clone of the original/central repo in Mercurial? It seemed like that when I read the manual. And a tag is just a named identifier, but you should clone the original/central repo whenever you want to create a tag? How about the sandbox? Should that be another clone too? So basically you just have in a repo all the folders/files that you would have in the trunk folder?

        central repo: project folders/files (not in any parent folder)
        tag repo:     cloned from central repo at a given moment for release (version1.0, version2.0 etc)
        branch repo:  cloned from central repo for adding features (chat, new_login etc)
        sandbox repo: experimental repo (could be pushed to central repo, or just deleted)

    Is this correct?

    Read the article

  • ZF2 - How to use the Hydrator/exchangeArray() to populate a nested object

    - by Dominic Watson
    I've got an object with values that are stored in my database. My object also contains another object, which is stored in the database using just its ID (a foreign key). http://framework.zend.com/manual/2.0/en/modules/zend.stdlib.hydrator.html Before the hydrator/exchangeArray() functionality in ZF2, you would use a mapper to grab everything you need to create the object. Now I'm trying to eliminate this extra layer by just using hydration/exchangeArray() to populate my objects, but am a bit stuck on creating the nested object. Should my entity have the inner object's table injected into it, so I can create it if the ID is passed to my exchangeArray()? Here are example entities:

        // Village
        id, name, position, square_id

        // Map Square
        id, name, type

    Upon sending square_id to my Village's exchangeArray() function, it would get the map table and use the hydrator to pull in the square using the ID I have. It doesn't seem right to me to have mapper instances inside my entity, as I thought entities should be disconnected from anything but their own entity-specific parameters and functionality?
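
    One ZF2 mechanism that fits here is a hydrator strategy: Zend\Stdlib\Hydrator\Strategy\StrategyInterface lets the hydrator convert square_id into a Square object on hydrate and back to an ID on extract, so the entity itself never touches the mapper. A sketch, assuming a mapper with a findById() method (both names hypothetical):

        <?php
        use Zend\Stdlib\Hydrator\Strategy\StrategyInterface;

        class SquareStrategy implements StrategyInterface
        {
            protected $mapper;

            public function __construct($mapper)   // e.g. a hypothetical SquareMapper
            {
                $this->mapper = $mapper;
            }

            public function hydrate($value)        // square_id -> Square object
            {
                return $this->mapper->findById($value);
            }

            public function extract($value)        // Square object -> square_id
            {
                return $value->getId();
            }
        }

        // Registered once on the hydrator, outside the entity:
        // $hydrator->addStrategy('square_id', new SquareStrategy($squareMapper));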

    Read the article

  • Error with MySQL Query

    - by Ken
    Okay, I must be an idiot, because this is my 3rd question for today. Here's my code:

        date_default_timezone_set("America/Los_Angeles");
        include("mainmenu.php");
        $con = mysql_connect("localhost", "root", "********");
        if (!$con) {
            die(mysql_error());
        }
        $usrname = $_POST['usrname'];
        $fname = $_POST['fname'];
        $lname = $_POST['lname'];
        $password = $_POST['password'];
        $email = $_POST['email'];
        mysql_select_db("`users`, $con) or die(mysql_error()");
        $query = ("INSERT INTO `users`.`data` (`id`, `usrname`, `fname`, `lname`, `email`, `password`)
                   VALUES (NULL, '$usrname', '$fname', '$lname', '$email', 'password'))");
        mysql_query('$query') or die(mysql_error());
        mysql_close($con);
        echo("Thank you for registering!");

    I always get the error returned as: "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '$query' at line 1." Help a newbie. I'm about to stab my monitor.
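
    For reference, the error text itself shows the literal string '$query' reaching MySQL: single-quoted strings in PHP do not interpolate variables, so mysql_query('$query') sends those six characters as the query. A corrected sketch of the apparent intent (same table layout assumed; note the repaired quoting on mysql_select_db(), the stray closing parenthesis removed, and $password actually being used; in real code the $_POST values should also go through mysql_real_escape_string() first):

        <?php
        mysql_select_db("users", $con) or die(mysql_error());
        $query = "INSERT INTO `users`.`data` (`id`, `usrname`, `fname`, `lname`, `email`, `password`)
                  VALUES (NULL, '$usrname', '$fname', '$lname', '$email', '$password')";
        mysql_query($query) or die(mysql_error());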

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) ). Excel has the ideal interface for the report consumers in the form of its Data Lists: users can filter and segment the data on-the-fly to see the specific details that they are interested in; they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel Data Lists, but that can handle much larger files? The next tool I tried was MS Access, and found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • Enforcing default time when only date in timestamptz provided

    - by Incognito
    Assume I have the table:

        postgres=# create table foo (datetimes timestamptz);
        CREATE TABLE
        postgres=# \d+ foo
                                 Table "public.foo"
          Column   |           Type           | Modifiers | Storage | Description
        -----------+--------------------------+-----------+---------+-------------
         datetimes | timestamp with time zone |           | plain   |
        Has OIDs: no

    So let's insert some values into it:

        postgres=# insert into foo values ('2012-12-12'),  -- This is the value I want to catch for.
                                          (null),
                                          ('2012-12-12 12:12:12'),
                                          ('2012-12-12 12:12');
        INSERT 0 4

    And here's what we have:

        postgres=# select * from foo;
               datetimes
        ------------------------
         2012-12-12 00:00:00+00

         2012-12-12 12:12:12+00
         2012-12-12 12:12:00+00
        (4 rows)

    Ideally, I'd like to set up a default timestamp value when a TIME is not provided with the input: rather than the de-facto time of 2012-12-12 being 00:00:00, I would like a default of 15:45:10. Meaning, my results should look like:

        postgres=# select * from foo;
               datetimes
        ------------------------
         2012-12-12 15:45:10+00   -- This one gets the default time.

         2012-12-12 12:12:12+00
         2012-12-12 12:12:00+00
        (4 rows)

    I'm not really sure how to do this in Postgres 8.4. I can't find anything in the datetime section of the manual or the sections regarding column default values.

    Read the article

  • Unable to save data in database manually and get latest auto increment id, cakePHP

    - by shabby
    I have checked this question as well and this one as well. I am trying to implement the model described in this question. What I want to do is: in the add function of the messages controller, create a record in the threads table (this table only has one field, which is the auto-incrementing primary key), then take its ID and insert it in the messages table along with the user ID which I already have, and then save it in the message_read_state and thread_participant tables. This is what I am trying to do in the Thread model:

        function saveThreadAndGetId() {
            //$data = array('Thread' => array());
            $data = array('id' => ' ');
            //Debugger::dump(print_r($data));
            $this->save($data);
            debug('id: ' . $this->id);
            $threadId = $this->getInsertID();
            debug($threadId);
            $threadId = $this->getLastInsertId();
            debug($threadId);
            die();
            return $threadId;
        }

    The line $data = array('id' => ' '); adds a row to the threads table, but I am unable to retrieve the ID. Is there any way I can get the ID, or am I saving it wrongly? Initially I was doing the query thing in the messages controller:

        $this->Thread->query('INSERT INTO threads VALUES();');

    but then I found out that the last-ID functions don't work on manual queries, so I reverted.
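
    A sketch of one common CakePHP 1.x way to do this, assuming stock Model behaviour (untested here): call create() to reset the model, save NULL for the primary key so MySQL auto-increments it, and read Model::$id, which Cake populates after a successful save():

        function saveThreadAndGetId() {
            $this->create();                                          // reset model state
            if ($this->save(array('Thread' => array('id' => null)))) {
                return $this->id;                                     // set by Cake after save()
            }
            return false;
        }

    The original code saves 'id' => ' ' (a space); a non-empty id in the data can make Cake treat the save as an update of an existing row rather than an insert, which would explain the missing insert ID.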

    Read the article

  • Inversion of control domain objects construction problem

    - by Andrey
    Hello! As I understand, an IoC container is helpful in the creation of application-level objects like services and factories, but domain-level objects should be created manually. Spring's manual tells us: "Typically one does not configure fine-grained domain objects in the container, because it is usually the responsibility of DAOs and business logic to create/load domain objects."

    Well. But what if my domain "fine-grained" object depends on some application-level object? For example, I have a UserViewer(User user, UserConstants constants) class. There user is a domain object which cannot be injected, but UserViewer also needs UserConstants, which is a high-level object injected by the IoC container. I want to inject UserConstants from the IoC container, but I also need a transient runtime parameter User here. What is wrong with the design? Thanks in advance!

    UPDATE: It seems I was not precise enough with my question. What I really need is an example of how to do this: create an instance of class UserViewer(User user, UserService service), where user is passed as a parameter and service is injected from IoC. If I inject UserViewer viewer, then how do I pass user to it? If I create UserViewer viewer manually, then how do I pass service to it?
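
    The usual answer is a small factory that is itself container-managed: the container injects the service into the factory once, and the factory takes the transient User as a plain runtime argument (in Spring itself this is often done with a hand-written factory bean or lookup-method injection). The pattern is language-agnostic; a sketch in PHP, with class names taken from the question:

        <?php
        class UserViewerFactory
        {
            private $service;

            public function __construct(UserService $service)  // injected by the container
            {
                $this->service = $service;
            }

            public function create(User $user)                 // transient runtime parameter
            {
                return new UserViewer($user, $this->service);
            }
        }

        // The container wires UserViewerFactory with UserService once;
        // application code then calls $factory->create($user) per User.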

    Read the article

  • PHP/json My field names are being truncated to 30 characters. Can I stop this?

    - by Biff MaGriff
    Hi everyone! OK, so I got this piece of vendor software that they said should be run on an Apache PHP server and a MySQL database. I didn't have either of those, so I put it on a PHP IIS server and I converted the code to work on SQL Server (mysql_select_db -> mssql_select_db, among other things). So I have the following code in a PHP file:

        $query = "SELECT * FROM TableName WHERE KEY_FIELD = '".$keyField."';";
        $result = mssql_query($query);
        $arr = array();
        while ($obj = mssql_fetch_object($result)) {
            $arr[] = $obj;
        }
        echo '{"results":'.json_encode($arr).'}';

    and my results look something like this (captured with Fiddler 2):

        {"results":[{"KEY_FIELD":"57", "My30characterlongfieldthatiscu":"GoodValue"}]}

    "My30characterlongfieldthatiscu" should be "My30characterlongfieldthatiscutoff". Kinda weird, no? The vendor claims that the app works perfectly on their end. I'm thinking this is some sort of IIS PHP limit; is there a way around it or can I expand it? I found this solution http://www.php.net/manual/en/ref.mssql.php#74834 but I don't understand it. Thanks!
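
    For what it's worth, the php.net comments describe the 30-character column-name cap as a limit of the old DB-Library-based mssql extension rather than of IIS. One workaround that needs no server configuration is aliasing long columns to short names in the query (the alias name here is hypothetical):

        $query = "SELECT KEY_FIELD,
                         My30characterlongfieldthatiscutoff AS long_field_1
                  FROM TableName
                  WHERE KEY_FIELD = '" . $keyField . "'";
        // json_encode() then emits "long_field_1" instead of a truncated name.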

    Read the article

  • What Are The Ways to Divide Text into Multiple Columns? Need an Overview of Possibilities.

    - by Sam
    Hi folks, from source1 and source2 I gather that IE9 will NOT support CSS3 multi-column! Since it is still the most popular browser (another thing I cannot understand), I am left with no other choice than to use programming power to make multi-columns work. Now, I use three divs that float left and which are manually filled with text. Please don't laugh, I know it's stupid! But I would wish not to have to worry about the columns, and to just have one piece of (uninterrupted) text which all goes into only 1 div, and then have a program smart enough to split it up into X equally wide columns.

    Question: before I start to reinvent the wheel, what methods of programming power have you known that tackle this elegantly? Please suggest your best working multi-column layout sources so I can evaluate which option is the best (I will update the below table).

    Exploring all possibilities 2011 and further, to enable a multi-column text user experience:

        Language    Author         SourceCodeUsage                                WorksOnAllMajorBrowsers?
        ------------------------------------------------------------------------------------------------
        html        manual labour  put text manually in separate left-floating    "Y"
                                   divs
                                   // Upside: control! Downside: a few changes
                                   // necessitate reflowing 3 divs manually!
        CSS3        w3c            css3.info/preview/multi-column-layout/         "N"
                                   // {-moz-column-count: 3; -webkit-column-count: 3; }
                                   // That's all!
        javascript  a list apart   will add url soon                              ?
        php         ?              ?                                              ?
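
    On the PHP row of that table, a minimal server-side sketch: split the text into N word-balanced chunks and emit one floated div per chunk (function name hypothetical; this balances by word count, not by rendered height):

        <?php
        function split_into_columns($text, $columns) {
            $words = preg_split('/\s+/', trim($text));
            $perColumn = (int) ceil(count($words) / $columns);
            $html = '';
            foreach (array_chunk($words, $perColumn) as $chunk) {
                $html .= '<div style="float:left; width:' . floor(100 / $columns) . '%">'
                       . htmlspecialchars(implode(' ', $chunk))
                       . '</div>';
            }
            return $html;
        }

        echo split_into_columns($articleText, 3);   // $articleText: the single uninterrupted text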

    Read the article
