Search Results

Search found 19390 results on 776 pages for 'key bindings'.

Page 687/776 | < Previous Page | 683 684 685 686 687 688 689 690 691 692 693 694  | Next Page >

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "INSERT ... ON DUPLICATE KEY UPDATE" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

    ```sql
    CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
    DECLARE
        member_content_type_id INTEGER;
    BEGIN
        member_content_type_id := (SELECT id FROM django_content_type
                                   WHERE app_label = 'web' AND model = 'member');

        DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int,
                                        title, description, content, url, meta_encoded)
        SELECT 'default', member_content_type_id, web_member.id, web_member.id, web_member.name, '',
               web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
               '', '{}'
        FROM web_member
        INNER JOIN web_user ON (web_member.user_id = web_user.id)
        INNER JOIN web_country ON (web_member.country_id = web_country.id)
        WHERE web_user.is_active = TRUE;
    END;
    $$ LANGUAGE plpgsql;
    ```

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.
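    One possible direction (a sketch, not tested against this schema): on PostgreSQL 9.5 or newer, INSERT ... ON CONFLICT ... DO UPDATE is the equivalent of MySQL's "insert ... on duplicate key update" and works directly with INSERT INTO ... SELECT. It does require a unique index on (content_type_id, object_id_int), which the post says does not exist yet:

    ```sql
    -- assumes: CREATE UNIQUE INDEX ON watson_searchentry (content_type_id, object_id_int);
    INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id, object_id_int,
                                    title, description, content, url, meta_encoded)
    SELECT 'default', member_content_type_id, web_member.id, web_member.id, web_member.name, '',
           web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
           '', '{}'
    FROM web_member
    INNER JOIN web_user ON (web_member.user_id = web_user.id)
    INNER JOIN web_country ON (web_member.country_id = web_country.id)
    WHERE web_user.is_active = TRUE
    ON CONFLICT (content_type_id, object_id_int)
    DO UPDATE SET title = EXCLUDED.title, content = EXCLUDED.content;
    ```

    Rows for members that disappeared or became inactive would still need a separate DELETE, and a WHERE clause on the DO UPDATE can skip rows whose values have not actually changed.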

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result set:

    ```
    mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
        -> JOIN slip s ON s.cust_id = c.cust_id
        -> JOIN line l ON l.slip_id = s.slip_id
        -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
        -> GROUP BY c.cust_name
        -> HAVING SUM(l.line_subtotal) > 49999
        -> ORDER BY c.cust_name;
    +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
    | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
    +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
    |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
    |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
    |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
    |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |
    +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
    4 rows in set (0.04 sec)
    ```

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Does anyone have any idea as to what could be slowing it down so much?
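    One avenue to investigate (a guess, not a verified fix): the EXPLAIN row counts are only estimates, and each of the 446 rows read from `line` via idx_vend_id still needs a read of the full row to fetch slip_id and line_subtotal. A covering index would let that step be served from the index alone; whether it actually removes the minute-long delay depends on the real data:

    ```sql
    -- hypothetical index name; covers the join column and the summed column
    ALTER TABLE line ADD INDEX idx_vend_slip_subtotal (vend_id, slip_id, line_subtotal);
    ```

    Running ANALYZE TABLE on the joined tables afterwards keeps the optimizer's statistics current.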

    Read the article

  • What is the fastest way to insert 100 000 records from one database to another?

    - by Pentium10
    I have a mobile application. My client has a large data set, ~100,000 records, and it's updated frequently. When we sync we need to copy from one database to another. I have attached the second database to the main one and run an insert:

    ```sql
    INSERT INTO table SELECT * FROM sync.table;
    ```

    This is extremely slow; it takes about 10 minutes, I think. I noticed that the journal file grows step by step. How can I speed this up?

    EDIT 1: I have indexes off and the journal off. Using INSERT INTO table SELECT * FROM sync.table it still takes 10 minutes.

    EDIT 2: If I run a query like

    ```sql
    SELECT id, invitem, invid, cost FROM inventory WHERE itemtype = 1 ORDER BY invitem LIMIT 50
    ```

    it takes 15-20 seconds. The table schema is:

    ```sql
    CREATE TABLE inventory (
        'id' INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
        'serverid' INTEGER NOT NULL DEFAULT 0,
        'itemtype' INTEGER NOT NULL DEFAULT 0,
        'invitem' VARCHAR,
        'instock' FLOAT NOT NULL DEFAULT 0,
        'cost' FLOAT NOT NULL DEFAULT 0,
        'invid' VARCHAR,
        'categoryid' INTEGER DEFAULT 0,
        'pdacategoryid' INTEGER DEFAULT 0,
        'notes' VARCHAR,
        'threshold' INTEGER NOT NULL DEFAULT 0,
        'ordered' INTEGER NOT NULL DEFAULT 0,
        'supplier' VARCHAR,
        'markup' FLOAT NOT NULL DEFAULT 0,
        'taxfree' INTEGER NOT NULL DEFAULT 0,
        'dirty' INTEGER NOT NULL DEFAULT 1,
        'username' VARCHAR,
        'version' INTEGER NOT NULL DEFAULT 15
    )
    ```

    Indexes are created like this:

    ```sql
    CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
    CREATE INDEX idx_inventory_invitem ON inventory (invitem);
    CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);
    ```

    I am wondering: isn't INSERT INTO ... SELECT * FROM the fastest built-in way to do a massive data copy?

    EDIT 3: SQLite is serverless, so please stop voting for a particular answer; I am sure that is not the answer.
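    For reference, a sketch of the usual bulk-copy recipe (table and index names taken from the question; it assumes the target table is cleared and fully rebuilt on each sync, which may not match the real sync logic, and the pragmas trade durability for speed, so they only belong in a controlled sync step): copy into a table with no indexes, inside one explicit transaction, then rebuild the indexes once at the end:

    ```sql
    PRAGMA synchronous = OFF;
    PRAGMA journal_mode = MEMORY;

    BEGIN TRANSACTION;
    DROP INDEX IF EXISTS idx_inventory_categoryid;
    DROP INDEX IF EXISTS idx_inventory_invitem;
    DROP INDEX IF EXISTS idx_inventory_itemtype;

    DELETE FROM inventory;
    INSERT INTO inventory SELECT * FROM sync.inventory;

    CREATE INDEX idx_inventory_categoryid ON inventory (pdacategoryid);
    CREATE INDEX idx_inventory_invitem ON inventory (invitem);
    CREATE INDEX idx_inventory_itemtype ON inventory (itemtype);
    COMMIT;
    ```

    Maintaining indexes row by row during the copy is usually the dominant cost, so rebuilding them once at the end tends to be the biggest win.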

    Read the article

  • Updating the foreground of a label on window active property-WPF

    - by Deb
    I have a label which shows the name of the window. I want to update the colour of the label based on the IsActive property of the window, using styles and triggers, so that all labels inheriting this style exhibit the same behaviour. Can anyone suggest how? I tried this:

    ```xml
    <Style TargetType="{x:Type Label}" x:Key="HeaderLabel">
        <Style.Triggers>
            <DataTrigger Binding="{Binding (Window.IsActive)}" Value="True">
                <Setter Property="FontSize" Value="15"/>
                <Setter Property="FontWeight" Value="Bold"/>
                <Setter Property="FontFamily" Value="Arial"/>
                <Setter Property="Foreground" Value="Black"/>
                <Setter Property="HorizontalAlignment" Value="Left"/>
            </DataTrigger>
            <DataTrigger Binding="{Binding (Window.IsActive)}" Value="False">
                <Setter Property="FontSize" Value="15"/>
                <Setter Property="FontWeight" Value="Bold"/>
                <Setter Property="FontFamily" Value="Arial"/>
                <Setter Property="Foreground" Value="White"/>
                <Setter Property="HorizontalAlignment" Value="Left"/>
            </DataTrigger>
        </Style.Triggers>
    </Style>
    ```
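    One likely issue (a sketch, not verified against this project): the parenthesized path (Window.IsActive) is resolved against the label's DataContext, which is usually not the Window, so the trigger never fires. Binding to the ancestor Window explicitly generally works:

    ```xml
    <DataTrigger Binding="{Binding IsActive, RelativeSource={RelativeSource AncestorType={x:Type Window}}}"
                 Value="True">
        <Setter Property="Foreground" Value="Black"/>
    </DataTrigger>
    ```

    The same RelativeSource binding can be used for the Value="False" trigger, or that trigger can be dropped and its setters moved into the style's default setters.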

    Read the article

  • Web services or shared database for (game) server communication?

    - by jaaronfarr
    We have two server clusters: the first is made up of typical web applications backed by SQL databases. The second is highly optimized multiplayer game servers which keep all data in memory. Both clusters communicate with clients via HTTP (Ajax with JSON). There are a few cases in which we need to share data between the two server types; for example, reporting back and storing the results of a game (which should ultimately end up in the database).

    We're considering several approaches for inter-server communication:

      - Just share the MySQL databases between clusters (introduce SQL to the game servers)
      - Share data in a distributed key-value store like Memcache, Redis, etc.
      - Use an RPC technology like Google ProtoBufs or Apache Thrift
      - Use RESTful web services (the game server would POST back to the web servers, for example)

    At the moment, we're leaning towards web services or just sharing the database. Sharing the database seems easy, but we're concerned this adds extra memory use and a new dependency to the game servers. Web services provide good separation of concerns and fit with the existing Ajax we use, but add complexity, overhead and many more ways for communication to fail.

    Are there any other good reasons not to use one or the other approach? Which would be easier to scale?

    Read the article

  • What is wrong with this asynchronous task?

    - by bluebrain
    The method onPostExecute is simply not executed: I can see 16 in LogCat, but none of the onPostExecute logs ever appear. When I tried to debug it, it seemed to jump to the first line of the file (the package line) after the return statement.

    ```java
    private class Client extends AsyncTask<Integer, Void, Integer> {

        protected Integer doInBackground(Integer... params) {
            Log.e(TAG, 10 + "");
            try {
                socket = new Socket(target, port);
                Log.e(TAG, 11 + "");
                oos = new ObjectOutputStream(socket.getOutputStream());
                Log.e(TAG, 14 + "");
                ois = new ObjectInputStream(socket.getInputStream());
                Log.e(TAG, 15 + "");
            } catch (UnknownHostException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
            Log.e(TAG, 16 + "");
            return 1;
        }

        protected void onPostExecute(Integer result) {
            Log.e(TAG, 13 + "");
            try {
                Log.e(TAG, 12 + "");
                oos.writeUTF(key);
                Log.e(TAG, 13 + "");
                if (ois.readInt() == OKAY) {
                    isConnected = true;
                    Log.e(TAG, 14 + "");
                } else {
                    Log.e(TAG, 15 + "");
                    isConnected = false;
                }
            } catch (IOException e) {
                e.printStackTrace();
                isClosed = true;
            }
        }
    }
    ```
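    One thing that stands out (a guess, not a confirmed diagnosis): the socket writes and reads happen in onPostExecute, which runs on the UI thread, where blocking network I/O is a bad idea and is rejected outright on newer Android versions. A minimal sketch that keeps all stream work inside doInBackground, assuming the same fields (socket, oos, ois, key, OKAY, isConnected) as the original:

    ```java
    private class Client extends AsyncTask<Integer, Void, Boolean> {

        @Override
        protected Boolean doInBackground(Integer... params) {
            try {
                socket = new Socket(target, port);
                oos = new ObjectOutputStream(socket.getOutputStream());
                ois = new ObjectInputStream(socket.getInputStream());
                oos.writeUTF(key);          // handshake happens off the UI thread
                oos.flush();
                return ois.readInt() == OKAY;
            } catch (IOException e) {       // UnknownHostException is an IOException
                Log.e(TAG, "connection failed", e);
                return false;
            }
        }

        @Override
        protected void onPostExecute(Boolean result) {
            isConnected = result;           // UI-thread work only: update state/UI
        }
    }
    ```

    The @Override annotations also guard against the classic cause of a silently skipped onPostExecute: a signature that does not actually override the base class method.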

    Read the article

  • How can I serialize functions using JSON or some other serialization library?

    - by Oragamster
    I am trying to use JavaScript to write a simple text adventure that I can then post on my blog and run on my iPhone. I have run into a problem, though. I want the program to save its state into cookies, using JSON to convert the state into strings and then put it in a cookie, but then I realised that I couldn't serialize the functions that are on my item object. The idea was that the item would have an associative array containing the name of the use as the key and the function as the value. This worked well until I tried to serialize it. I learned that I could create a JSON-like serialization for functions by storing the body in a string and using escape characters for the double quotes, but for some reason I was unable to create the cookie with the function stored as a string. When I wrote the cookie and then tried to get it back, the string wasn't there. My code and the overall project are on my site if you want to look at them, though my full code, including the item actions, is not posted yet.
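    A common pattern for this (a sketch, not taken from the project): keep the functions in a shared lookup table and serialize only their names, so the saved state is plain data that JSON and cookies can handle:

    ```javascript
    // Hypothetical names: "actions" is a registry shared by all game code.
    var actions = {
        read: function (player) { /* ... */ },
        burn: function (player) { /* ... */ }
    };

    // The item stores action names, not functions.
    var letter = { name: "old letter", uses: ["read", "burn"] };

    var saved = JSON.stringify(letter);   // safe to put in a cookie
    var loaded = JSON.parse(saved);

    // Look the function up again by name when the item is used.
    var currentPlayer = { location: "study" };
    actions[loaded.uses[0]](currentPlayer);
    ```

    Because only names cross the serialization boundary, the functions never need to be rebuilt from strings (and eval can be avoided entirely).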

    Read the article

  • Too many columns to index - use mySQL Partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make writes very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, for 95% of these searches only a small subset of the rows needs to be searched - quite a small number, say 50,000 rows.

    So we have considered using MySQL partitioned tables: a column that is basically isActive is what we would divide the two partitions by. Most search queries would be run with isActive = 1, so they would hit the small 50,000-row partition and be quick without other indexes.

    The only issue is that which rows have isActive = 1 is not fixed; i.e. it's not based on the date of the row or anything fixed like that. We will need to update isActive based on use of the data in that row. As I understand it that is no problem, though; the data would just be moved from one partition to the other during the UPDATE query.

    We do have a primary key on id for the row, though, and I am not sure if this is a problem; the manual seemed to suggest the partitioning had to be based on any primary keys. This would be a huge problem for us, because the primary key id has no bearing on whether the row is active.
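    For reference, that constraint is real: every unique key, including the primary key, must contain all columns used in the partitioning expression, so the primary key would have to become composite. A sketch of what that could look like (hypothetical table name, and it assumes isActive is an integer column):

    ```sql
    ALTER TABLE records
        DROP PRIMARY KEY,
        ADD PRIMARY KEY (id, isActive);

    ALTER TABLE records
        PARTITION BY LIST (isActive) (
            PARTITION p_active   VALUES IN (1),
            PARTITION p_inactive VALUES IN (0)
        );
    ```

    Lookups by id alone still work (they just check both partitions), and an UPDATE that flips isActive moves the row between partitions as assumed above; note, though, that MySQL then only enforces uniqueness of the (id, isActive) pair, not of id by itself.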

    Read the article

  • Fixing an SQL where statement that is ugly and confusing

    - by Mike Wills
    I am directly querying the back-end MS SQL Server for a software package. The key field (vehicle number) is defined as alpha, though we enter numeric values in it. There is only one exception: we place an "R" before the number when the vehicle is being retired (which means we sold it or the vehicle was junked). Assuming the users do this right, we should not run into a problem with this method. (Right or wrong isn't the issue here.)

    Fast forward to now. I am trying to query a subset of these vehicle numbers (800 - 899) for some special processing. By doing a range of '800' to '899' we also get 80, 81, etc. If I cast the vehicle number to an INT, I should be able to get the right range - except that these "R" vehicles are kicking me in the butt now. I have tried

    ```sql
    where vehicleId not like 'R%' and cast(vehicleId as int) between 800 and 899
    ```

    however, I get a casting error on one of these "R" vehicles. What does work is

    ```sql
    where vehicleId between '800' and '899' and cast(vehicleId as int) between 800 and 899
    ```

    but I feel there has to be a better and less confusing way. I have also tried other variations with HAVING and a subquery, all producing a casting error.
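    One less confusing option (a sketch, not the poster's code): SQL Server does not guarantee the order in which AND-ed predicates are evaluated, which is why the NOT LIKE version can still feed an "R" value to the cast. Wrapping the cast in a CASE makes the conversion conditional, since CASE branches are evaluated in order:

    ```sql
    SELECT *
    FROM vehicles                                   -- hypothetical table name
    WHERE CASE WHEN vehicleId NOT LIKE '%[^0-9]%'   -- only rows that are all digits
               THEN CAST(vehicleId AS INT)
          END BETWEEN 800 AND 899;
    ```

    Rows containing any non-digit character (the "R" vehicles) fall through to NULL, and NULL BETWEEN 800 AND 899 is not true, so they drop out without a conversion error.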

    Read the article

  • Using map() on a _set in a template?

    - by Stuart Grimshaw
    I have two models like this:

    ```python
    class KPI(models.Model):
        """KPI model to hold the basic info on a Key Performance Indicator"""
        title = models.CharField(blank=False, max_length=100)
        description = models.TextField(blank=True)
        target = models.FloatField(blank=False, null=False)
        group = models.ForeignKey(KpiGroup)
        subGroup = models.ForeignKey(KpiSubGroup, null=True)
        unit = models.TextField(blank=True)
        owner = models.ForeignKey(User)
        bt_measure = models.BooleanField(default=False)

    class KpiHistory(models.Model):
        """A historical log of previous KPI values."""
        kpi = models.ForeignKey(KPI)
        measure = models.FloatField(blank=False, null=False)
        kpi_date = models.DateField()
    ```

    I'm using RGraph to display the stats on internal wallboards. The handy thing is that Python lists are output in a format that JavaScript sees as an array, so by mapping all the values into a list like this:

    ```python
    def f(x):
        return float(x.measure)

    stats = map(f, KpiHistory.objects.filter(kpi=1))
    ```

    I can simply use {{ stats }} in the template, and the RGraph code sees it as an array, which is exactly what I want:

    ```
    [87.0, 87.5, 88.5, 90]
    ```

    So my question is this: is there any way I can achieve the same effect using Django's _set functionality, to keep down the amount of data I'm passing into the template? Up until now I've been passing in a single KPI object to be graphed, but now I want to pass in a whole bunch, so is there anything I can do with _set? {{ kpi.kpihistory_set }} dumps the whole model out, but I just want the measure field. I can't see any of the built-in template methods that will let me pull out just the single field I want. How have other people handled this situation?
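    One possible approach (a sketch, not from the post): put a small helper method on KPI that returns just the measures; template code can call zero-argument methods, and values_list avoids building full KpiHistory instances:

    ```python
    class KPI(models.Model):
        # ... fields as above ...

        def measures(self):
            """Just the measure values, oldest first, e.g. [87.0, 87.5, 88.5, 90.0]."""
            return list(self.kpihistory_set.order_by('kpi_date')
                                           .values_list('measure', flat=True))
    ```

    In the template, {{ kpi.measures }} then renders the same kind of list that the stats variable did, for each KPI in whatever collection is passed in.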

    Read the article

  • Always send certain values with an ajax request/post

    - by DZittersteyn
    I'm building a system that displays a list of users and, on selection of a user, requests some form of password. These values are saved in a hidden field on the page and need to be sent with every request as a form of authentication. (I'm aware of the MITM vulnerability that lies herein, but it's a very low-key system, so security is not a large concern.)

    Now I need to send these values with each and every request, to authenticate the currently "logged in" user. I'd like to automate this via ajaxSetup; however, I'm running into some issues. My first try was:

    ```javascript
    init_user_auth: function(){
        $.ajaxSetup({
            data: {
                'user':     site_user.selected_user_id(),
                'passcode': site_user.selected_user_pc(),
                'barcode':  site_user.selected_user_bc()
            }
        });
    },
    ```

    However, as I should have known, this reads the values once, at the time of the call to ajaxSetup, and never rereads them. What I need is a way to actually call the functions every time an ajax call is made. I'm currently trying to understand what is happening here: https://groups.google.com/forum/?fromgroups=#!topic/jquery-dev/OBcEfgvTJ9I, but between the flamewar and the very low-level stuff going on there, I'm not exactly sure I get what is going on. Is this the way to proceed, or should I just face facts and manually add login info to each ajax call?
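    One way to re-evaluate the values per request (a sketch, assuming jQuery 1.5+ and the same site_user helpers): register a prefilter, which runs just before every request is sent and can append to the already-serialized data:

    ```javascript
    $.ajaxPrefilter(function (options) {
        // Re-read the current user each time a request goes out.
        var auth = $.param({
            user:     site_user.selected_user_id(),
            passcode: site_user.selected_user_pc(),
            barcode:  site_user.selected_user_bc()
        });
        options.data = options.data ? options.data + '&' + auth : auth;
    });
    ```

    An alternative is to wrap $.ajax in a small helper of your own that merges the auth fields in before delegating, which keeps the behaviour explicit at each call site.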

    Read the article

  • I've registered my OAuth about 5 times now, but... (twitteR package, R)

    - by user2985989
    I'm attempting to mine Twitter data in R, and am having trouble getting started. I created a Twitter account and an app in Twitter Developers, changed the settings to read, write, and access, created my access token, and followed instructions to the letter in registering it. My code:

    ```r
    > library(twitteR)
    > download.file(url="http://curl.haxx.se/ca/cacert.pem",
    +               destfile="cacert.pem")
    > requestURL <- "https://api.twitter.com/oauth/request_token"
    > accessURL <- "https://api.twitter.com/oauth/access_token"
    > authURL <- "https://api.twitter.com/oauth/authorize"
    > consumerKey <- "my key"        # took this part out for privacy's sake
    > consumerSecret <- "my secret"  # this too
    > twitCred <- OAuthFactory$new(consumerKey=consumerKey, consumerSecret=consumerSecret,
    +                              requestURL=requestURL, accessURL=accessURL, authURL=authURL)
    > twitCred$handshake(cainfo="cacert.pem")
    To enable the connection, please direct your web browser to:
    https://api.twitter.com/oauth/authorize?oauth_token=zxgHXJkYAB3wQ2IVAeyJjeyid7WK6EGPfouGmlx1c
    When complete, record the PIN given to you and provide it here: 0010819
    > registerTwitterOAuth(twitCred)
    [1] TRUE
    > save(list="twitCred", file="twitteR_credentials")
    ```

    And yet, this:

    ```r
    > s <- searchTwitter('#United', cainfo="cacert.pem")
    [1] "Unauthorized"
    Error in twInterfaceObj$doAPICall(cmd, params, "GET", ...) :
      Error: Unauthorized
    ```

    I'm about to have a temper tantrum. I'd be extremely grateful if someone could explain to me what is going wrong, or, better yet, how to fix it. Thank you.

    Read the article

  • How to reset keyboard for an entry field?

    - by David.Chu.ca
    I am using the tag field as a flag on text fields / text view fields for auto-jumping to the next field:

    ```objc
    - (BOOL)findNextEntryFieldAsResponder:(UIControl *)field {
        BOOL retVal = NO;
        for (UIView *aView in mEntryFields) {
            if (aView.tag == (field.tag + 1)) {
                [aView becomeFirstResponder];
                retVal = YES;
                break;
            }
        }
        return retVal;
    }
    ```

    It works fine in terms of auto-jumping to the next field when the Next key is pressed. However, in my case the keyboards differ between fields. For example, one field is numbers & punctuation and the next one is default (alphabetic keys). Jumping to the numbers & punctuation field is OK, but the next field then stays on the same layout, and the user has to press the "ABC" key to get back to the alphabetic keyboard. Is there any way to reset the keyboard for a field to the keyboard defined in its xib? Are there any APIs available? I guess I have to do something in the following delegate method:

    ```objc
    - (void)textFieldDidBeginEditing:(UITextField *)textField {
        // reset the keyboard to the requested keyboard view?
        // ...
    }
    ```

    OK, I found a solution close to my case by slatvik:

    ```objc
    - (void)textFieldDidBeginEditing:(UITextField *)textField {
        textField.keyboardType = UIKeyboardTypeAlphabet;
    }
    ```

    However, when the previous text field is numeric, the keyboard stays numeric after auto-jumping to the next field. Is there any way to set the keyboard to alphabetic mode?
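    A possible tweak on that solution (a sketch, not a verified fix): if the keyboard is already on screen when the type changes, asking the responder to reload its input views (available since iOS 3.2) forces the new layout to appear:

    ```objc
    - (void)textFieldDidBeginEditing:(UITextField *)textField {
        textField.keyboardType = UIKeyboardTypeAlphabet;
        [textField reloadInputViews];   // refresh the keyboard if it is already visible
    }
    ```

    An older workaround is to resign and immediately re-acquire first responder status after changing keyboardType, which likewise makes the keyboard redraw.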

    Read the article

  • Syntax for finding structs in multisets - C++

    - by Sarah
    I can't seem to figure out the syntax for finding structs in containers. I have a multiset of Event structs, and I'm trying to find one of these structs by searching on its key. I get the compiler error commented below.

    ```cpp
    struct Event {
    public:
        bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); }
        bool operator > ( const Event & rhs ) const { return ( time > rhs.time ); }
        bool operator == ( const Event & rhs ) const { return ( time == rhs.time ); }

        double time;
        int eventID;
        int hostID;
        int s;
    };

    typedef std::multiset< Event, std::less< Event > > EventPQ;
    EventPQ currentEvents;

    double oldRecTime = 20.0;
    EventPQ::iterator ceItr = currentEvents.find( EventPQ::key_type( oldRecTime ) ); // no matching function call
    ```

    I've tried a few permutations to no avail. I thought defining the comparison and equality operators was going to be enough.
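    For what it's worth, a sketch of one way to make the lookup compile (an illustration, not the poster's code): EventPQ::key_type is Event itself, and Event has no constructor taking a double, so the search key has to be built as an Event first:

    ```cpp
    Event key = Event();   // value-initialized: all members start at zero
    key.time = oldRecTime;

    EventPQ::iterator ceItr = currentEvents.find(key);
    if (ceItr != currentEvents.end()) {
        // found an event whose time compares equal to 20.0
    }
    ```

    Since the multiset orders only by time via operator<, the other members of the key are irrelevant to the lookup; alternatively, adding an explicit Event(double t) constructor would let the original EventPQ::key_type(oldRecTime) call work as written.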

    Read the article

  • Should core application configuration be stored in the database, and if so what should be done to se

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to it in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database.

    This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data).

    I'm starting to wonder whether it would have been more appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to determine whether or not this is appropriate?

    The steps I'm taking to protect these valuable configuration records are to add triggers that prevent updates/deletes, and the database user that this application uses will only have the ability to manipulate data through stored procedures. What else can I do?

    Read the article

  • Automatically update php loop with data pulled from database

    - by John Svensson
    SQL structure:

    ```sql
    CREATE TABLE IF NOT EXISTS `map` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `x` int(11) NOT NULL,
      `y` int(11) NOT NULL,
      `type` varchar(50) NOT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;
    ```

    The page is called like http://localhost/map.php?x=0&y=0. When I update x and y via POST or GET, I would like to pull the new data from the database without refreshing the site. How would I manage that? Could someone give me some examples? I am really stuck here.

    ```php
    <?php
    mysql_connect('localhost', 'root', '');
    mysql_select_db('hol');

    $startX = $_GET['x'];
    $startY = $_GET['y'];
    $fieldHeight = 6;
    $fieldWidth = 6;

    $sql = "SELECT id, x, y, type FROM map WHERE x BETWEEN ".$startX." AND ".($startX+$fieldWidth).
           " AND y BETWEEN ".$startY." AND ".($startY+$fieldHeight);
    $result = mysql_query($sql);

    $positions = array();
    while ($row = mysql_fetch_assoc($result)) {
        $positions[$row['x']][$row['y']] = $row;
    }

    echo "<table>";
    for ($y = $startY; $y < $startY + $fieldHeight; $y++) {
        echo "<tr>";
        for ($x = $startX; $x < $startX + $fieldWidth; $x++) {
            echo "<td>";
            if (isset($positions[$x][$y])) {
                echo $positions[$x][$y]['type'];
            } else {
                echo "(".$x.",".$y.")";
            }
            echo "</td>";
        }
        echo "</tr>";
    }
    echo "</table>";
    ?>
    ```
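    A rough client-side sketch of the usual approach (it assumes jQuery is loaded and that the table above is wrapped in a container such as <div id="map">, neither of which is in the original code): fetch map.php with Ajax and swap the rendered table in place, so the page itself never reloads:

    ```javascript
    function refreshMap(x, y) {
        $.get('map.php', { x: x, y: y }, function (html) {
            $('#map').html(html);   // replace the old table with the freshly rendered one
        });
    }

    // call refreshMap(newX, newY) whenever the coordinates change,
    // or poll for updates on a timer:
    setInterval(function () { refreshMap(0, 0); }, 5000);
    ```

    A cleaner variant has map.php return JSON and builds the table in JavaScript, but the HTML-swap version is the smallest change to the existing script.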

    Read the article

  • C++ STL question related to insert iterators and overloaded operators

    - by rshepherd
    ```cpp
    #include <list>
    #include <set>
    #include <iterator>
    #include <algorithm>

    using namespace std;

    class MyContainer {
    public:
        string value;

        MyContainer& operator=(const string& s) {
            this->value = s;
            return *this;
        }
    };

    int main() {
        list<string> strings;
        strings.push_back("0");
        strings.push_back("1");
        strings.push_back("2");

        set<MyContainer> containers;
        copy(strings.begin(), strings.end(), inserter(containers, containers.end()));
    }
    ```

    The preceding code does not compile. In standard C++ fashion the error output is verbose and difficult to understand. The key part seems to be this:

    ```
    /usr/include/c++/4.4/bits/stl_algobase.h:313: error: no match for ‘operator=’ in
    ‘__result.std::insert_iterator::operator* [with _Container = std::set, std::allocator ]() =
     __first.std::_List_iterator::operator* [with _Tp = std::basic_string, std::allocator ]()’
    ```

    I interpret this to mean that the assignment operator needed is not defined. I took a look at the source code for insert_iterator and noted that it has an overloaded assignment operator. The copy algorithm must use the insert iterator's overloaded assignment operator to do its work(?). I guess that because my input iterator is over a container of strings and my output iterator is over a container of MyContainers, the overloaded insert_iterator assignment operator can no longer work. This is my best guess, but I am probably wrong. So, why exactly does this not work, and how can I accomplish what I am trying to do?
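    For what it's worth, a sketch of one way to make this compile (an illustration, not the poster's code): std::set needs an ordering for MyContainer, and the insertion needs a way to turn each string into a MyContainer, which a converting constructor provides; the operator=(const string&) overload is never consulted:

    ```cpp
    class MyContainer {
    public:
        std::string value;

        MyContainer() {}
        MyContainer(const std::string& s) : value(s) {}   // lets a string convert to MyContainer

        bool operator<(const MyContainer& rhs) const {    // ordering required by std::set
            return value < rhs.value;
        }
    };
    ```

    With those two pieces in place, insert_iterator's operator= can accept each string (converting it to a temporary MyContainer) and hand it to set::insert, which is what the copy call needs.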

    Read the article

  • lambda expressions in VB.NET... what am I doing wrong???

    - by Bob
    When I run this C# code there are no problems, but when I translate it into VB.NET it compiles and then blows up because the 'CompareString' member is not allowed in the expression... I feel like I'm missing something key here...

    ```csharp
    private void PrintButton_Click(object sender, EventArgs e)
    {
        if (ListsListBox.SelectedIndex > -1)
        {
            //Context
            using (ClientOM.ClientContext ctx = new ClientOM.ClientContext(UrlTextBox.Text))
            {
                //Get selected list
                string listTitle = ListsListBox.SelectedItem.ToString();
                ClientOM.Web site = ctx.Web;
                ctx.Load(site, s => s.Lists.Where(l => l.Title == listTitle));
                ctx.ExecuteQuery();
                ClientOM.List list = site.Lists[0];

                //Get fields for this list
                ctx.Load(list, l => l.Fields.Where(f => f.Hidden == false &&
                    (f.CanBeDeleted == true || f.InternalName == "Title")));
                ctx.ExecuteQuery();

                //Get items for the list
                ClientOM.ListItemCollection listItems = list.GetItems(
                    ClientOM.CamlQuery.CreateAllItemsQuery());
                ctx.Load(listItems);
                ctx.ExecuteQuery();

                // DOCUMENT CREATION CODE GOES HERE
            }
            MessageBox.Show("Document Created!");
        }
    }
    ```

    but in VB.NET this errors because 'CompareString' members are not allowed in the ctx.Load() lambdas:

    ```vb
    Private Sub PrintButton_Click(sender As Object, e As EventArgs)
        If ListsListBox.SelectedIndex > -1 Then
            'Context
            Using ctx As New ClientOM.ClientContext(UrlTextBox.Text)
                'Get selected list
                Dim listTitle As String = ListsListBox.SelectedItem.ToString()
                Dim site As ClientOM.Web = ctx.Web
                ctx.Load(site, Function(s) s.Lists.Where(Function(l) l.Title = listTitle))
                ctx.ExecuteQuery()
                Dim list As ClientOM.List = site.Lists(0)

                'Get fields for this list
                ctx.Load(list, Function(l) l.Fields.Where(Function(f) f.Hidden = False AndAlso
                    (f.CanBeDeleted = True OrElse f.InternalName = "Title")))
                ctx.ExecuteQuery()

                'Get items for the list
                Dim listItems As ClientOM.ListItemCollection = list.GetItems(ClientOM.CamlQuery.CreateAllItemsQuery())
                ctx.Load(listItems)

                ' DOCUMENT CREATION CODE GOES HERE
                ctx.ExecuteQuery()
            End Using
            MessageBox.Show("Document Created!")
        End If
    End Sub
    ```
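    A guess at the cause (not confirmed against this exact code): for strings, the VB = operator compiles to a call to Microsoft.VisualBasic.CompilerServices.Operators.CompareString, which the client object model's expression translator does not understand, whereas C#'s == becomes a plain equality node. Calling String.Equals explicitly keeps CompareString out of the expression tree; a sketch of the first Load call rewritten that way:

    ```vb
    ctx.Load(site, Function(s) s.Lists.Where(Function(l) String.Equals(l.Title, listTitle)))
    ```

    The same substitution would apply to the InternalName comparison in the second Load call.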

    Read the article

  • Spreadsheet_Excel_Writer large data output is damaged

    - by dr3w
    I use Spreadsheet_Excel_Writer to generate an .xls file, and it works fine until I have to deal with a large amount of data. At a certain point it just writes some nonsense characters and stops filling certain columns. However, some columns are filled right to the end (generally numeric data). I'm not quite sure how the xls document is formed: row by row, or column by column. Also, it is obviously not an error in a particular string, because when I cut out some data the error appears a little bit further on. I don't think there is any need for all of my code; here are the essentials:

    ```php
    $filename = 'file.xls';
    $workbook = & new Spreadsheet_Excel_Writer();
    $workbook->setVersion(8);
    $contents =& $workbook->addWorksheet('Logistics');
    $contents->setInputEncoding('UTF-8');
    $workbook->send($filename);

    // here is the part where I write data down
    $contents->write(0, 0, 'Field A');
    $contents->write(0, 1, 'Field B');
    $contents->write(0, 2, 'Field C');

    $ROW = 1;
    foreach ($ordersArr as $key => $val) {
        $contents->write($ROW, 0, $val['a']);
        $contents->write($ROW, 1, $val['b']);
        $contents->write($ROW, 2, $val['c']);
        $ROW++;
    }

    $workbook->close();
    ```

    Read the article

  • Will C++0x support __stdcall or extern "C" capture-nothing lambdas?

    - by Daniel Trebbien
    Yesterday I was thinking about whether it would be possible to use the convenience of C++0x lambda functions to write callbacks for Windows API functions. For example, what if I wanted to use a lambda as an EnumChildProc with EnumChildWindows? Something like:

    ```cpp
    EnumChildWindows(hTrayWnd, CALLBACK [](HWND hWnd, LPARAM lParam) {
        // ...
        return static_cast<BOOL>(TRUE); // continue enumerating
    }, reinterpret_cast<LPARAM>(&myData));
    ```

    Another use would be to write extern "C" callbacks for C routines. E.g.:

    ```cpp
    my_class *pRes = static_cast<my_class*>(bsearch(&key, myClassObjectsArr,
        myClassObjectsArr_size, sizeof(my_class),
        extern "C" [](const void *pV1, const void *pV2) {
            const my_class& o1 = *static_cast<const my_class*>(pV1);
            const my_class& o2 = *static_cast<const my_class*>(pV2);
            int res;
            // ...
            return res;
        }));
    ```

    Is this possible? I can understand that lambdas that capture variables will never be compatible with C, but it at least seems possible to me that capture-nothing lambdas can be compatible.
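    For reference, a sketch of what the standard does guarantee (an illustration, not part of the question): a capture-nothing lambda converts to an ordinary function pointer, so it can be handed to a C-style API such as qsort; whether that pointer can carry a specific calling convention (__stdcall) or C language linkage is left to the implementation, and MSVC, for instance, later added conversions for each calling convention:

    ```cpp
    #include <cstdlib>

    struct my_class { int key; };   // hypothetical stand-in for the question's type

    int main() {
        my_class arr[] = { {3}, {1}, {2} };

        // The capture-nothing lambda converts to int(*)(const void*, const void*),
        // which is exactly the comparator type std::qsort expects.
        std::qsort(arr, 3, sizeof(my_class),
                   [](const void* a, const void* b) -> int {
                       return static_cast<const my_class*>(a)->key
                            - static_cast<const my_class*>(b)->key;
                   });
        return 0;
    }
    ```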

    Read the article

  • Help on MySQL table indexing when GROUP BY is used in a query

    - by Silver Light
    Thank you for your attention. There are two InnoDB tables:

    ```
    Table authors
        id        INT
        nickname  VARCHAR(50)
        status    ENUM('active', 'blocked')
        about     TEXT

    Table books
        author_id INT
        title     VARCHAR(150)
    ```

    I'm running a query against these tables to get each author and a count of the books he has:

    ```sql
    SELECT a.*, COUNT( b.id ) AS book_count
    FROM authors AS a, books AS b
    WHERE a.status != 'blocked'
      AND b.author_id = a.id
    GROUP BY a.id
    ORDER BY a.nickname
    ```

    This query is very slow (it takes about 6 seconds to execute). I have an index on books.author_id and it works perfectly, but I do not know how to create an index on the authors table so that this query can use it. Here is the current EXPLAIN:

    ```
    id  select_type  table  type  possible_keys               key            key_len  ref   rows  Extra
    1   SIMPLE       a      ALL   PRIMARY,id_status_nickname  NULL           NULL     NULL  3305  Using where; Using temporary; Using filesort
    1   SIMPLE       b      ref   key_author_id               key_author_id  5        a.id  2     Using where; Using index
    ```

    I've looked at the MySQL manual on optimizing queries with GROUP BY, but could not figure out how to apply it to my query. I'd appreciate any help and hints on this - what should the index structure be so that MySQL can use it?
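    One direction to try (a guess from the EXPLAIN, not a measured fix): the authors pass is a full scan followed by a temporary table and a filesort, so an index covering both the status filter and the nickname ordering gives the optimizer something to work with; whether it actually removes the temporary table and filesort depends on the MySQL version and the data distribution:

    ```sql
    ALTER TABLE authors ADD INDEX idx_status_nickname (status, nickname, id);
    ```

    Since status != 'blocked' is a range condition, it can also help to write it as status = 'active' if those are the only two values, which turns it into an equality lookup followed by ordered reads on nickname.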

    Read the article

  • Some ASP.NET and Access

    - by Fazleh
    Good day all. I have a big problem, but I think it's minor for you guys on Stack Overflow. I am creating a web application that has two main parts: the payment part and the requisition part. Its backbone is Access and the code is ASP.NET. I have pasted a link to the project at http://www.mediafire.com/download/p09fefreifidud3/Inyatsi.rar so it will be easy for someone to see what I am blabbing about.

    I managed to sort out most of the application, but I have been having a few problems. Now for my problems:

    1. AddRequisition.aspx and AddPayment.aspx both have a reference number. I want it to be a unique number (but not a primary key) in the following format: DDMMYY(TransactionNo)(UserID), e.g. 24061101PK. I have tried and tried but have not been able to sort it out.

    2. AmountInWords gets its value from Amount and converts the amount into words. That's not all: it also picks up whichever currency was selected in CurrencyPaidIn and puts the respective currency into the result, e.g. 123.45 USD becomes "One hundred and twenty three dollars and forty five cents". I tried using queries, but as you will see that went all wrong.

    Those are the only two things that I can't seem to get my head around. I do know that there are some things that are not conventional ASP.NET and that some text boxes are not the right size; I was thinking of sorting those out after I get these two fixed, because they are simple to do.

    I really need some help with this application. If someone could just have a look at the code and add a few things here and there, that would be great. Thanks in advance. Faz
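    For the reference number, a minimal sketch (hypothetical variable names; transactionNo and userId would come from the application's own data access code), since it is really just date formatting plus concatenation:

    ```vb
    ' DDMMYY + zero-padded transaction number + user ID, e.g. "24061101PK"
    Dim reference As String = DateTime.Now.ToString("ddMMyy") & transactionNo.ToString("00") & userId
    ```

    Uniqueness still has to be enforced somewhere, for example with a unique index on the reference column or by re-reading the day's highest transaction number inside a transaction. The amount-in-words conversion is usually easier to do in code than in a query, with the currency's unit names chosen by the CurrencyPaidIn value.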

    Read the article

  • PHP 5.3 and interface \ArrayAccess

    - by Jakub Lédl
    I'm now working on a project where I have one class that implements the ArrayAccess interface. However, I'm getting an error that says that my implementation "must be compatible with that of ArrayAccess::offsetSet()". My implementation looks like this:

    ```php
    public function offsetSet($offset, $value)
    {
        if (!is_string($offset)) {
            throw new \LogicException("...");
        }
        $this->params[$offset] = $value;
    }
    ```

    So, to me it looks like my implementation is correct. Any idea what is wrong? Thanks very much! The class looks like this:

    ```php
    class HttpRequest implements \ArrayAccess
    {
        // tons of private variables, methods for working
        // with the current HTTP request etc. Really nothing that
        // could interfere with that interface.

        // ArrayAccess implementation
        public function offsetExists($offset)
        {
            return isset($this->params[$offset]);
        }

        public function offsetGet($offset)
        {
            return isset($this->params[$offset]) ? $this->params[$offset] : NULL;
        }

        public function offsetSet($offset, $value)
        {
            if (!is_string($offset)) {
                throw new \LogicException("You can only assign to params using a specified key.");
            }
            $this->params[$offset] = $value;
        }

        public function offsetUnset($offset)
        {
            unset($this->params[$offset]);
        }
    }
    ```

    Read the article

  • DataTrigger not reevaluating after property changes

    - by frozen
    I have a ListBox whose ItemsSource is data-bound to an ObservableCollection (this is done in the code-behind as the window is created). The ListBox then has the following style assigned to its items:

    usercontrol.xaml

    ```xml
    ...
    <ListBox x:Name="communicatorListPhoneControls"
             ItemContainerStyle="{StaticResource templateForCalls}"/>
    ...
    ```

    app.xaml

    ```xml
    ...
    <Style x:Key="templateForCalls" TargetType="{x:Type ListBoxItem}">
        <Setter Property="ContentTemplate" Value="{StaticResource templateRinging}"/>
        <Style.Triggers>
            <DataTrigger Binding="{Binding Path=hasBeenAnswered}" Value="True">
                <Setter Property="ContentTemplate" Value="{StaticResource templateAnswered}"/>
            </DataTrigger>
        </Style.Triggers>
    </Style>
    ...
    ```

    When an object is added to the observable collection, it appears in the ListBox with the correct initial data template. However, when the hasBeenAnswered property is set to true (while debugging I can see the collection is correct), the DataTrigger does not re-evaluate and update the item to use the second data template.

    I have implemented the INotifyPropertyChanged event in my object, and if I bind to a value in the template I can see the value update. It's just that the DataTrigger will not re-evaluate and change to the correct template. I know the DataTrigger binding is correct, because if I close the window and open it again it correctly applies the second data template, since hasBeenAnswered is set to True.

    Read the article

  • C++ STL question related to insert iterators and sets

    - by rshepherd
    ```cpp
    #include <list>
    #include <set>
    #include <iterator>
    #include <algorithm>

    using namespace std;

    class MyContainer {
    public:
        string value;

        MyContainer& operator=(const string& s) {
            this->value = s;
            return *this;
        }
    };

    int main() {
        list<string> strings;
        strings.push_back("0");
        strings.push_back("1");
        strings.push_back("2");

        set<MyContainer> containers;
        copy(strings.begin(), strings.end(), inserter(containers, containers.end()));
    }
    ```

    The preceding code does not compile. In typical STL style the error output is verbose and difficult to understand. The key part seems to be this:

    ```
    /usr/include/c++/4.4/bits/stl_algobase.h:313: error: no match for ‘operator=’ in
    ‘__result.std::insert_iterator::operator* [with _Container = std::set, std::allocator ]() =
     __first.std::_List_iterator::operator* [with _Tp = std::basic_string, std::allocator ]()’
    ```

    I interpret this to mean that the assignment operator needed is not defined. I took a look at the source code for insert_iterator and noted that it has an overloaded assignment operator. The copy algorithm must use the insert iterator's overloaded assignment operator to do its work(?). I guess that because my input iterator is over a container of strings and my output iterator is over a container of MyContainers, the overloaded insert_iterator assignment operator can no longer work. This is my best guess, but I am probably wrong. So, why exactly does this not work, and how can I accomplish what I am trying to do?

    Read the article
