Search Results

Search found 10366 results on 415 pages for 'const char pointer'.

Page 331/415 | < Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >

  • Are endless loops in bad form?

    - by rlbond
    So I have some C++ code for back-tracking nodes in a BFS algorithm. It looks a little like this: typedef std::map<int, int> MapType; bool IsValuePresent(const MapType& myMap, int beginVal, int searchVal) { int current_val = beginVal; while (true) { if (current_val == searchVal) return true; MapType::const_iterator it = myMap.find(current_val); assert(it != myMap.end()); if (current_val == it->second) // end of the line return false; current_val = it->second; } } However, the while (true) seems... suspicious to me. I know this code works, and logically I know it should work. However, I can't shake the feeling that there should be some condition in the while, but really the only possible one is to use a bool variable just to say if it's done. Should I stop worrying? Or is this really bad form? EDIT: Thanks to all for noticing that there is a way to get around this. However, I would still like to know if there are other valid cases.
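    For what it's worth, the loop above can be written without while (true) by folding both exit tests into the loop condition. A minimal sketch of that refactoring (same successor-map layout as in the question; it treats a missing key as "not found" rather than asserting):

        #include <map>

        typedef std::map<int, int> MapType;

        bool IsValuePresent(const MapType& myMap, int beginVal, int searchVal)
        {
            int current_val = beginVal;
            MapType::const_iterator it = myMap.find(current_val);
            // Stop when: the key is missing, the target is found, or the
            // chain ends in a self-reference ("end of the line").
            while (it != myMap.end() && current_val != searchVal
                   && current_val != it->second) {
                current_val = it->second;
                it = myMap.find(current_val);
            }
            return current_val == searchVal;
        }

    Whether this reads better than while (true) with early returns is exactly the stylistic judgment call the question is about.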

    Read the article

  • Looking for a workaround for IE 6/7 "Unspecified Error" bug when accessing offsetParent; using ASP.NET

    - by CodeChef
    I'm using jQuery UI's draggable and droppable libraries in a simple ASP.NET proof of concept application. This page uses the ASP.NET AJAX UpdatePanel to do partial page updates. The page allows a user to drop an item into a trashcan div, which will invoke a postback that deletes a record from the database, then rebinds the list (and other controls) that the item was dragged from. All of these elements (the draggable items and the trashcan div) are inside an ASP.NET UpdatePanel. Here is the dragging and dropping initialization script: function initDragging() { $(".person").draggable({helper:'clone'}); $("#trashcan").droppable({ accept: '.person', tolerance: 'pointer', hoverClass: 'trashcan-hover', activeClass: 'trashcan-active', drop: onTrashCanned }); } $(document).ready(function(){ initDragging(); var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(function() { initDragging(); }); }); function onTrashCanned(e,ui) { var id = $('input[id$=hidID]', ui.draggable).val(); if (id != undefined) { $('#hidTrashcanID').val(id); __doPostBack('btnTrashcan',''); } } When the page posts back, partially updating the UpdatePanel's content, I rebind the draggables and droppables. When I then grab a draggable with my cursor, I get an "htmlfile: Unspecified error." exception. I can resolve this problem in the jQuery library by replacing elem.offsetParent with calls to this function that I wrote: function IESafeOffsetParent(elem) { try { return elem.offsetParent; } catch(e) { return document.body; } } I also have to avoid calls to elem.getBoundingClientRect() as it throws the same error. For those interested, I only had to make these changes in the jQuery.fn.offset function in the Dimensions Plugin. My questions are: Although this works, are there better ways (cleaner; better performance; without having to modify the jQuery library) to solve this problem? If not, what's the best way to manage keeping my changes in sync when I update the jQuery libraries in the future? For example, can I extend the library somewhere other than just inline in the files that I download from the jQuery website? Update: @some It's not publicly accessible, but I will see if SO will let me post the relevant code into this answer. Just create an ASP.NET Web Application (name it DragAndDrop) and create these files. Don't forget to set Complex.aspx as your start page.
You'll also need to download the jQuery UI drag and drop plug in as well as jQuery core Complex.aspx <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Complex.aspx.cs" Inherits="DragAndDrop.Complex" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Untitled Page</title> <script src="jquery-1.2.6.min.js" type="text/javascript"></script> <script src="jquery-ui-personalized-1.5.3.min.js" type="text/javascript"></script> <script type="text/javascript"> function initDragging() { $(".person").draggable({helper:'clone'}); $("#trashcan").droppable({ accept: '.person', tolerance: 'pointer', hoverClass: 'trashcan-hover', activeClass: 'trashcan-active', drop: onTrashCanned }); } $(document).ready(function(){ initDragging(); var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(function() { initDragging(); }); }); function onTrashCanned(e,ui) { var id = $('input[id$=hidID]', ui.draggable).val(); if (id != undefined) { $('#hidTrashcanID').val(id); __doPostBack('btnTrashcan',''); } } </script> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <div> <asp:UpdatePanel ID="updContent" runat="server" UpdateMode="Always"> <ContentTemplate> <asp:LinkButton ID="btnTrashcan" Text="trashcan" runat="server" CommandName="trashcan" onclick="btnTrashcan_Click" style="display:none;"></asp:LinkButton> <input type="hidden" id="hidTrashcanID" runat="server" /> <asp:Button ID="Button1" runat="server" Text="Save" onclick="Button1_Click" /> <table> <tr> <td style="width: 300px;"> <asp:DataList ID="lstAllPeople" runat="server" DataSourceID="odsAllPeople" DataKeyField="ID"> <ItemTemplate> <div class="person"> <asp:HiddenField ID="hidID" runat="server" Value='<%# Eval("ID") %>' /> Name: <asp:Label ID="lblName" runat="server" Text='<%# Eval("Name") %>' /> <br /> <br /> </div> </ItemTemplate> </asp:DataList> <asp:ObjectDataSource ID="odsAllPeople" runat="server" SelectMethod="SelectAllPeople" TypeName="DragAndDrop.Complex+DataAccess" onselecting="odsAllPeople_Selecting"> <SelectParameters> <asp:Parameter Name="filter" Type="Object" /> </SelectParameters> </asp:ObjectDataSource> </td> <td style="width: 300px;vertical-align:top;"> <div id="trashcan"> drop here to delete </div> <asp:DataList ID="lstPeopleToDelete" runat="server" DataSourceID="odsPeopleToDelete"> <ItemTemplate> ID: <asp:Label ID="IDLabel" runat="server" Text='<%# Eval("ID") %>' /> <br /> Name: <asp:Label ID="NameLabel" runat="server" Text='<%# Eval("Name") %>' /> <br /> <br /> </ItemTemplate> </asp:DataList> <asp:ObjectDataSource ID="odsPeopleToDelete" runat="server" onselecting="odsPeopleToDelete_Selecting" SelectMethod="GetDeleteList" TypeName="DragAndDrop.Complex+DataAccess"> <SelectParameters> <asp:Parameter Name="list" Type="Object" /> </SelectParameters> </asp:ObjectDataSource> </td> </tr> </table> </ContentTemplate> </asp:UpdatePanel> </div> </form> </body> </html> Complex.aspx.cs namespace DragAndDrop { public partial class Complex : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected List<int> DeleteList { get { if (ViewState["dl"] == null) { List<int> dl = new List<int>(); ViewState["dl"] = dl; return dl; } else { return (List<int>)ViewState["dl"]; } } } public class DataAccess { public IEnumerable<Person> SelectAllPeople(IEnumerable<int> filter) { return 
Database.SelectAll().Where(p => !filter.Contains(p.ID)); } public IEnumerable<Person> GetDeleteList(IEnumerable<int> list) { return Database.SelectAll().Where(p => list.Contains(p.ID)); } } protected void odsAllPeople_Selecting(object sender, ObjectDataSourceSelectingEventArgs e) { e.InputParameters["filter"] = this.DeleteList; } protected void odsPeopleToDelete_Selecting(object sender, ObjectDataSourceSelectingEventArgs e) { e.InputParameters["list"] = this.DeleteList; } protected void Button1_Click(object sender, EventArgs e) { foreach (int id in DeleteList) { Database.DeletePerson(id); } DeleteList.Clear(); lstAllPeople.DataBind(); lstPeopleToDelete.DataBind(); } protected void btnTrashcan_Click(object sender, EventArgs e) { int id = int.Parse(hidTrashcanID.Value); DeleteList.Add(id); lstAllPeople.DataBind(); lstPeopleToDelete.DataBind(); } } } Database.cs namespace DragAndDrop { public static class Database { private static Dictionary<int, Person> _people = new Dictionary<int,Person>(); static Database() { Person[] people = new Person[] { new Person("Chad") , new Person("Carrie") , new Person("Richard") , new Person("Ron") }; foreach (Person p in people) { _people.Add(p.ID, p); } } public static IEnumerable<Person> SelectAll() { return _people.Values; } public static void DeletePerson(int id) { if (_people.ContainsKey(id)) { _people.Remove(id); } } public static Person CreatePerson(string name) { Person p = new Person(name); _people.Add(p.ID, p); return p; } } public class Person { private static int _curID = 1; public int ID { get; set; } public string Name { get; set; } public Person() { ID = _curID++; } public Person(string name) : this() { Name = name; } } }

    Read the article

  • Emacs Lisp: how to set encoding for call-process

    - by RamyenHead
    I thought I knew how to set coding-system (or encoding): use process-coding-system-alist. Apparently, it's not working. ;; -*- coding: utf-8 -*- (require 'cl) (let ((process-coding-system-alist '(("cygwin/bin/bash" . (utf-8-dos . utf-8-unix))))) (setq my-words (list "Lilo" "?_?" "_?" "?_" "?" "Stitch") my-cygwin-bash "C:/cygwin/bin/bash.exe" my-outbuf (get-buffer-create "*my cygwin bash echo test*") ) (with-current-buffer my-outbuf (goto-char (point-max)) (loop for word in my-words do (insert (concat "echo " word "\n")) (call-process my-cygwin-bash nil my-outbuf nil "-c" (concat "echo " word))) ) (display-buffer my-outbuf) ) Running the above code, the output is this: echo Lilo Lilo echo ?_? /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"' /usr/bin/bash: -c: line 1: syntax error: unexpected end of file echo _? /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"' /usr/bin/bash: -c: line 1: syntax error: unexpected end of file echo ?_ /usr/bin/bash: $'echo \346\267\205?': command not found echo ? /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"' /usr/bin/bash: -c: line 1: syntax error: unexpected end of file echo Stitch Stitch Anything sent to cygwin in unicode is failing (MS Windows, Korean).

    Read the article

  • QTcpServer not emitting signals

    - by Timothy Baldridge
    Okay, I'm sure this is simple, but I'm not seeing it: HTTPServer::HTTPServer(QObject *parent) : QTcpServer(parent) { connect(this, SIGNAL(newConnection()), this, SLOT(acceptConnection())); } void HTTPServer::acceptConnection() { qDebug() << "Got Connection"; QTcpSocket *clientconnection = this->nextPendingConnection(); connect(clientconnection, SIGNAL(disconnected()), clientconnection, SLOT(deleteLater())); HttpRequest *req = new HttpRequest(clientconnection, this); req->processHeaders(); delete req; } int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); HTTPServer http(0); http.listen(QHostAddress::Any, 8011); qDebug() << "Started: " << http.isListening() << http.serverAddress() << ":" << http.serverPort(); return a.exec(); } According to the docs my acceptConnection() slot should be called whenever there is a new connection. I can connect to this TCP port with a browser or telnet, and I don't get any errors, so I know it's listening, but execution never reaches my acceptConnection() function. And yes, my objects inherit from QObject; I've just stripped the code down to the essential parts above. There are no build errors....
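    One thing worth ruling out here: for SIGNAL/SLOT connections to resolve at runtime, the class declaration must carry the Q_OBJECT macro and be processed by moc; without it, connect() fails quietly (it only prints a warning to the debug output, never a build error). A minimal header sketch, assuming the class is declared roughly like this:

        #include <QTcpServer>

        class HTTPServer : public QTcpServer
        {
            Q_OBJECT   // without this macro, connect() cannot find the slot at runtime

        public:
            explicit HTTPServer(QObject *parent = 0);

        private slots:
            void acceptConnection();   // must be declared in a slots section
        };

    An alternative that avoids the string-based lookup entirely is overriding QTcpServer::incomingConnection() instead of connecting to newConnection().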

    Read the article

  • json_encode with mysql content and umlauts in utf-8

    - by i3rutus
    Hey, I feel my beard growing while trying to find out the problem here. Basically, the problem is that umlauts/special characters (ä, ö, ß, ...) don't work. I guess everyone is sick and tired of these questions, but all the solutions found online don't seem to work. I'm having utf-8 content in a utf-8 MySQL database. I feel the problem is somewhere in the database connection, but I just can't figure it out. character_set_client utf8 character_set_connection utf8 character_set_database utf8 character_set_filesystem binary character_set_results utf8 character_set_server latin1 character_set_system utf8 I'm not sure if the problem is the latin1 for character_set_server, because I'm not into that MySQL stuff. I also don't know how to change it, because I can't access the MySQL server's config files. What confuses me is that if I get my results from the database and echo them, print_r gives the right result. ini_set('default_charset','utf-8'); header('Content-Type: text/plain; charset=utf-8'); Firefox says the char encoding is utf-8, but when I output: print_r($listnew); echo json_encode($listnew[5]); print_r gets everything right but json_encode does not. print_r: [5] => Array ( [id] => 5 [data] => U-Bahnhof Theresienstraße [size] => 17 ) json_encode: {"id":5,"data":"U-Bahnhof Theresienstra\u00dfe","size":17} I know json_encode needs a utf-8 string to work properly, and I feel I'm having an encoding problem here, but I just can't figure out where it is. Any help would be appreciated, thanks in advance. i3

    Read the article

  • Excel VBA - get ActiveX control checkbox when event handler is triggered

    - by danoran
    I have an Excel spreadsheet that is separated into different sections with named ranges. I want to hide a named range when a checkbox is clicked. I can do this for one checkbox, but I would like to have a single function that can hide the appropriate section based on the calling checkbox. I was planning on calling that function from the event handlers for when the checkboxes are clicked, and to pass the checkbox as an argument. Is there a way to access the checkbox object that calls the event handler? This works: Sub chkDogsInContest_Click() ActiveSheet.Names("DogsInContest").RefersToRange.EntireRow.Hidden = Not chkMemberData.Value End Sub But this is what I would like to do: Sub chkDogsInContest_Click() Module1.Show_Hide_Section (<calling checkbox>) End Sub These functions are defined in a different module: 'The format for the names of the checkbox controls is 'CHECKBOX_NAME_PREFIX + <name> 'where "name" is also the name of the associated Named Range Public Const CHECKBOX_NAME_PREFIX As String = "chk" 'The format for the names of the checkbox controls is 'CHECKBOX_NAME_PREFIX + <name> 'where "name" is also the name of the associated Named Range Public Function CheckName_To_SectionName(ByRef strCheckName As String) CheckName_To_SectionName = Mid(strCheckName, Len(CHECKBOX_NAME_PREFIX) + 1) End Function Public Sub Show_Hide_Section(ByRef chkBox As CheckBox) ActiveSheet.Names(CheckName_To_SectionName(chkBox.Name())).RefersToRange.EntireRow.Hidden = True End Sub

    Read the article

  • Memory leak while asynchronously loading BitmapSource images

    - by harry
    I have a fair few images that I'm loading into a ListBox in my WPF application. Originally I was using GDI to resize the images (the originals take up far too much memory). That was fine, except they were taking about 400ms per image. Not so fine. So in search of another solution I found a method that uses TransformedBitmap (which inherits from BitmapSource). That's great, I thought, I can use that. Except I'm now getting memory leaks somewhere... I'm loading the images asynchronously using a BackgroundWorker like so: BitmapSource bs = ImageUtils.ResizeBitmapSource(ImageUtils.GetImageSource(photo.FullName)); //BitmapSource bs = ImageUtils.GetImageSource(photo.FullName); bs.Freeze(); this.dispatcher.Invoke(new Action(() => { photo.Source = bs; })); GetImageSource just gets the Bitmap from the path and then converts to BitmapSource. Here's the code snippet for ResizeBitmapSource: const int thumbnailSize = 200; int width; int height; if (bs.Width > bs.Height) { width = thumbnailSize; height = (int)(bs.Height * thumbnailSize / bs.Width); } else { height = thumbnailSize; width = (int)(bs.Width * thumbnailSize / bs.Height); } BitmapSource tbBitmap = new TransformedBitmap(bs, new ScaleTransform(width / bs.Width, height / bs.Height, 0, 0)); return tbBitmap; That code is essentially the code from: http://rongchaua.net/blog/c-wpf-fast-image-resize/ Any ideas what could be causing the leak?

    Read the article

  • BN_hex2bn magically segfaults in OpenSSL

    - by xunil154
    Greetings, this is my first post on stackoverflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a Bignum. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA to a bignum. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck. header segment: typedef struct KEYS { RSA *serv; char* serv_pub; int pub_size; RSA *clnt; } KEYS; KEYS keys; Initializing function: // Generates and validates the server's key /* code for generating server RSA left out, it's working */ //Set client exponent keys.clnt = 0; keys.clnt = RSA_new(); BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent Problem code (in Network::server_handshake): // *Received an encrypted message from the network and decrypt into 'buffer' (1024 bytes long)* cout << "Assigning client's RSA" << endl; // I have verified that 'buffer' contains the proper key if (BN_hex2bn(&keys.clnt->n, buffer) < 0) { Error("ERROR reading server RSA"); } cout << "client's RSA has been assigned" << endl; The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the error (valgrind output) Invalid read of size 8 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) by 0x40F23E: Network::server_handshake() (Network.cpp:177) by 0x40EF42: Network::startNet() (Network.cpp:126) by 0x403C38: main (server.cpp:51) Address 0x20 is not stack'd, malloc'd or (recently) free'd Process terminating with default action of signal 11 (SIGSEGV) Access not within mapped region at address 0x20 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) And I don't know why it is; I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!
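    The "Address 0x20 is not stack'd, malloc'd or (recently) free'd" pattern usually means a NULL struct pointer is being dereferenced at a small member offset - here, most likely keys.clnt is NULL in the server process, e.g. because the KEYS keys; defined in the header may not refer to the one instance that was actually initialized. A hedged sketch of a defensive version (assuming the pre-1.1 OpenSSL layout from the question, where RSA members are directly accessible; also note BN_hex2bn reports failure by returning 0, not a negative value):

        #include <openssl/bn.h>
        #include <openssl/rsa.h>

        // Returns true on success; refuses to touch a NULL RSA object.
        bool assign_client_modulus(RSA *clnt, const char *hex)
        {
            if (clnt == NULL || hex == NULL)
                return false;              // keys.clnt was never RSA_new()'d here?
            if (BN_hex2bn(&clnt->n, hex) == 0)
                return false;              // conversion failed (returns 0, not < 0)
            return true;
        }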

    Read the article

  • Finding perfect numbers in C# (optimization)

    - by paradox
    I coded up a program in C# to find perfect numbers within a certain range as part of a programming challenge. However, I realized it is very slow when calculating perfect numbers upwards of 10000. Are there any methods of optimization that exist for finding perfect numbers? My code is as follows: using System; using System.Collections.Generic; using System.Linq; namespace ConsoleTest { class Program { public static List<int> FindDivisors(int inputNo) { List<int> Divisors = new List<int>(); for (int i = 1; i<inputNo; i++) { if (inputNo%i==0) Divisors.Add(i); } return Divisors; } public static void Main(string[] args) { const int limit = 100000; List<int> PerfectNumbers = new List<int>(); List<int> Divisors=new List<int>(); for (int i=1; i<limit; i++) { Divisors = FindDivisors(i); if (i==Divisors.Sum()) PerfectNumbers.Add(i); } Console.Write("Output ="); for (int i=0; i<PerfectNumbers.Count; i++) { Console.Write(" {0} ",PerfectNumbers[i]); } Console.Write("\n\n\nPress any key to continue . . . "); Console.ReadKey(true); } } }
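    The standard optimization is to collect divisors in pairs up to sqrt(n), which turns the per-number cost from O(n) into O(sqrt(n)). A sketch of the idea (in C++ rather than C#, but the algorithm carries over directly):

        #include <iostream>

        // Sum of proper divisors, pairing each divisor i with its cofactor n/i.
        int SumOfProperDivisors(int n)
        {
            int sum = 1;                        // 1 divides every n > 1
            for (int i = 2; i * i <= n; ++i) {
                if (n % i == 0) {
                    sum += i;
                    if (i != n / i)             // don't count a square root twice
                        sum += n / i;
                }
            }
            return sum;
        }

        int main()
        {
            for (int i = 2; i < 100000; ++i)
                if (SumOfProperDivisors(i) == i)
                    std::cout << i << ' ';      // prints: 6 28 496 8128
        }

    For much larger limits the Euclid-Euler theorem helps even more: every even perfect number has the form 2^(p-1) * (2^p - 1) with 2^p - 1 a Mersenne prime, so one can test a handful of prime exponents p instead of scanning every integer.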

    Read the article

  • Correctly assigning value to a Core Data attribute with an integer data-type

    - by Gordon Fontenot
    I'm missing something here, and feeling like an idiot about it. I'm using a UIPickerView in my app, and I need to assign the row number to a 32-bit integer attribute for a Core Data object. To do this, I am using this method: -(void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component { object.integerValue = row; } This is giving me a warning: warning: passing argument 1 of 'setIntegerValue:' makes pointer from integer without a cast What am I mixing up here? --Edit 1-- Ok, so I can get rid of the warning by changing the method to do the following: NSNumber *number = [NSNumber numberWithInteger:row]; object.integerValue = number; However, I still get a value of 0 for object.integerValue if I use NSLog to print it out. object.integerValue has a max value of 5, so I print out number instead, and then I'm getting a number above 62,000,000. Which doesn't seem right to me, since there are 5 rows. If I NSLog the row variable, I get a number between 0 and 5. So why do I end up with a completely different number after casting the number to NSNumber? --Edit 2-- Ok, so I'm realizing that there is some fundamental idea that I don't understand. I now understand that the 60 million + number can be cast back to the correct 0-5 number by using integerValue. So, it seems my question is how can I save an integer between 0-5 to the attribute if the NSNumber that is returned is over 60 million? Do I need to be using a different data type?

    Read the article

  • Qt: Force QWebView to click on a web element, even one not visible on the window

    - by Pirate for Profit
    So let's say I'm trying to click a link in the QWebView; here is what I have: // extending QWebView void MyWebView::click(const QString &selectorQuery) { QWebElement el = this->page()->mainFrame()->findFirstElement(selectorQuery); if (el.isNull()) return; el.setFocus(); QMouseEvent pressEvent(QMouseEvent::MouseButtonPress, el.geometry().center(), Qt::LeftButton, Qt::LeftButton, Qt::NoModifier); QCoreApplication::sendEvent(this, &pressEvent); QMouseEvent releaseEvent(QMouseEvent::MouseButtonRelease, el.geometry().center(), Qt::LeftButton, Qt::LeftButton, Qt::NoModifier); QCoreApplication::sendEvent(this, &releaseEvent); } And you call it like so: myWebView->click("a[href]"); // will click first link on page myWebView->click("input[type=submit]"); // submits a form THE ONLY PROBLEM IS: if the element is not visible in the window, it is impossible to click. What I mean is, if you have to scroll down to see it, you can't click it. I imagine this has to do with the geometry: since the element doesn't show up on the screen, it can't do the math to click it correctly. Any ideas to get around this? Maybe some way to make the window behave like a billion x billion pixels but still look 200x200?
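    A hedged sketch of one workaround: ask the page to scroll the element into view first, then convert its document geometry into viewport coordinates before synthesizing the mouse events (this assumes Qt 4.6+, where QWebElement::evaluateJavaScript is available):

        void MyWebView::click(const QString &selectorQuery)
        {
            QWebElement el = this->page()->mainFrame()->findFirstElement(selectorQuery);
            if (el.isNull())
                return;
            // Let the page itself bring the element on-screen...
            el.evaluateJavaScript("this.scrollIntoView(true);");
            // ...then translate document coordinates into viewport coordinates.
            QPoint pos = el.geometry().center()
                         - this->page()->mainFrame()->scrollPosition();
            QMouseEvent pressEvent(QEvent::MouseButtonPress, pos,
                                   Qt::LeftButton, Qt::LeftButton, Qt::NoModifier);
            QCoreApplication::sendEvent(this, &pressEvent);
            QMouseEvent releaseEvent(QEvent::MouseButtonRelease, pos,
                                     Qt::LeftButton, Qt::LeftButton, Qt::NoModifier);
            QCoreApplication::sendEvent(this, &releaseEvent);
        }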

    Read the article

  • Setting WPF RichTextBox width and height according to the size of a monospace font

    - by oxeb
    I am trying to fit a WPF RichTextBox to exactly accommodate a grid of characters in a particular monospace font. I am currently using FormattedText to determine the width and height of my RichTextBox, but the measurements it is providing me with are too small--specifically two characters in width too small. Is there a better way to perform this task? This does not seem to be an appropriate way to determine the size of my control. RichTextBox rtb; rtb = new RichTextBox(); FontFamily fontFamily = new FontFamily("Consolas"); double fontSize = 16; char standardizationCharacter = 'X'; String standardizationLine = ""; for(long loop = 0; loop < columns; loop ++) { standardizationLine += standardizationCharacter; } standardizationLine += Environment.NewLine; String standardizationString = ""; for(long loop = 0; loop < rows; loop ++) { standardizationString += standardizationLine; } Typeface typeface = new Typeface(fontFamily, FontStyles.Normal, FontWeights.Normal, FontStretches.Normal); FormattedText formattedText = new FormattedText(standardizationString, CultureInfo.CurrentCulture, FlowDirection.LeftToRight, typeface, fontSize, Brushes.Black); rtb.Width = formattedText.Width; rtb.Height = formattedText.Height;

    Read the article

  • Reachability sometimes fails, even when we do have an internet connection

    - by stoutyhk
    Hi I've searched but can't see a similar question. I've added a method to check for an internet connection per the Reachability example. It works most of the time, but when installed on the iPhone, it quite often fails even when I do have internet connectivity (only when on 3G/EDGE - WiFi is OK). Basically the code below returns NO. If I switch to another app, say Mail or Safari, and connect, then switch back to the app, then the code says the internet is reachable. Kinda seems like it needs a 'nudge'. Anyone seen this before? Any ideas? Many thanks James + (BOOL) doWeHaveInternetConnection{ BOOL success; // google should always be up right?! const char *host_name = [@"google.com" cStringUsingEncoding:NSASCIIStringEncoding]; SCNetworkReachabilityRef reachability = SCNetworkReachabilityCreateWithName(NULL, host_name); SCNetworkReachabilityFlags flags; success = SCNetworkReachabilityGetFlags(reachability, &flags); BOOL isAvailable = success && (flags & kSCNetworkFlagsReachable) && !(flags & kSCNetworkFlagsConnectionRequired); if (isAvailable) { NSLog(@"Google is reachable: %d", flags); }else{ NSLog(@"Google is unreachable"); } return isAvailable; }

    Read the article

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base which works with the gcc & Visual Studio compilers, where the enum base type is by default 32-bit (integer type). This code also has lots of inline + embedded assembly which treats enums as integer type, and enum data is used as 32-bit flags in many cases. When we compiled this code with the RealView ARM RVCT 2.2 compiler, we started getting many issues, since the RealView compiler decides the enum base type automatically based on the value an enum is set to. http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm For example, consider the below enum, enum Scale { TimesOne, //0 TimesTwo, //1 TimesFour, //2 TimesEight, //3 }; This enum is used as a 32-bit flag, but the compiler optimizes it to unsigned char type for this enum. Using the --enum_is_int compiler option is not a good solution for our case, since it converts all the enums to 32-bit, which will break interaction with any external code compiled without --enum_is_int. This is a warning I found in the RVCT compilers & library guide: The --enum_is_int option is not recommended for general use and is not required for ISO-compatible source. Code compiled with this option is not compliant with the ABI for the ARM Architecture (base standard) [BSABI], and incorrect use might result in a failure at runtime. This option is not supported by the C++ libraries. Question How to convert all enum base types (by hand-coded changes) to use 32-bit without affecting value ordering? enum Scale { TimesOne=0x00000000, TimesTwo, // 0x00000001 TimesFour, // 0x00000002 TimesEight, //0x00000003 }; I tried the above change, but the compiler optimizes this too, unluckily for us. :( There is some syntax in .NET like enum Scale : int Is this ISO C++ standard, and does the ARM compiler lack it? There is no #pragma to control this enum in the ARM RVCT 2.2 compiler. Is there any hidden pragma available?
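    A common hand-coded workaround is to add a sentinel enumerator whose value only fits in 32 bits, forcing the compiler to widen the underlying type without disturbing the existing enumerator values:

        enum Scale {
            TimesOne,                     // 0
            TimesTwo,                     // 1
            TimesFour,                    // 2
            TimesEight,                   // 3
            Scale_Force32 = 0x7FFFFFFF    // sentinel: never used as a real value
        };

    As for enum Scale : int - a fixed underlying type is C++11 syntax (shared with C#), so it is not available in ISO C++98/03, and a 2.2-era RVCT compiler predates it.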

    Read the article

  • An error has occurred opening external DTD (w3.org, xhtml1-transitional.dtd). 503 Server Unavailable

    - by Cheeso
    I'm trying to do XPath queries over an XHTML document. The document looks like this: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html lang="en" xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"> <head> .... </head> <body> ... </body> </html> Because the document includes various char entities (&nbsp; and so on), I need to use the DTD in order to load it with an XmlReader. So my code looks like this: var reader = XmlReader.Create(sr, new XmlReaderSettings { ProhibitDtd = false }); But when I run this, it returns An error has occurred while opening external DTD 'http://www.w3.org/TR/xhtml1-transitional.dtd': The remote server returned an error: (503) Server Unavailable. Now, I know why I am getting the 503 error. W3C explained it very clearly. But I still want to validate the document. How can I validate with the DTD, and get the entity definitions, without hitting the w3.org website? related: - java.io.IOException: Server returned HTTP response code: 503

    Read the article

  • Problem with WCF Streaming

    - by H4mm3rHead
    Hi, I was looking at this thread: http://stackoverflow.com/questions/1935040/how-to-handle-large-file-uploads-via-wcf I need to have a web service hosted at my provider where I need to upload and download files to. We are talking videos from 1Mb to 100Mb, hence the streaming approach. I can't get it to work. I declared an interface: [ServiceContract] public interface IFileTransferService { [OperationContract] void UploadFile(Stream stream); } and all is fine; I implement it like this: public string FileName = "test"; public void UploadFile(Stream stream) { try { FileStream outStream = File.Open(FileName, FileMode.Create, FileAccess.Write); const int bufferLength = 4096; byte[] buffer = new byte[bufferLength]; int count = 0; while((count = stream.Read(buffer, 0, bufferLength)) > 0) { //progress outStream.Write(buffer, 0, count); } outStream.Close(); stream.Close(); //saved } catch(Exception ex) { throw new Exception("error: "+ex.Message); } } Still no problem; it's published to my webserver out on the interweb. So far so good. Now I make a reference to it and will pass it a FileStream, but the argument is now a byte[] - why is that, and how do I get it the proper way for streaming? Edit My binding looks like this: <bindings> <basicHttpBinding> <binding name="StreamingFileTransferServicesBinding" transferMode="StreamedRequest" maxBufferSize="65536" maxReceivedMessageSize="204003200" /> </basicHttpBinding> </bindings> I can consume it without problems, and get no errors - other than my input parameter has changed from a stream to a byte[].

    Read the article

  • Is there any better IDOMImplementation other than MSXML?

    - by Chau Chee Yang
    There are 3 IDOMImplementations available in Delphi: MSXML Xerces XML ADOM XML v4 MSXML is the default IDOMImplementation. My test counts the time needed to load a 10MB xml file. I use a Delphi unit generated from an XSD using XML data binding to load the xml file. This unit has 3 common functions: function Getmenubar(Doc: IXMLDocument): IXMLMenubarType; function Loadmenubar(const FileName: WideString): IXMLMenubarType; function Newmenubar: IXMLMenubarType; I have read comments on the web that MSXML's overhead is high and that it doesn't perform well compared to other XML parsers. However, my test shows that MSXML is the best among them, Xerces XML 2nd, and ADOM XML v4 the worst: MSXML - 0.6410 seconds Xerces XML - 2.4220 seconds ADOM XML v4 - 67.50 seconds I also came across OmniXML, which claims to have much better performance compared to MSXML, but I never succeeded in using it with the unit generated by XML data binding. Is there any other vendor that implements Delphi's IDOMImplementation and works much better than MSXML? I am using Delphi 2010 and Windows 7.

    Read the article

  • Struts2 ParametersInterceptor problem with oauth_token

    - by Tahir Akram
    I am developing an application in Struts2 with Twitter4J on GAE/J. I am getting the following exception in the GAE log. Unable to understand what's wrong with it. com.opensymphony.xwork2.interceptor.ParametersInterceptor setParameters: ParametersInterceptor - [setParameters]: Unexpected Exception caught setting 'oauth_token' on 'class com.action.Home: Error setting expression 'oauth_token' with value '[Ljava.lang.String;@146ac5a' Following is my struts.xml <!DOCTYPE struts PUBLIC "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN" "http://struts.apache.org/dtds/struts-2.0.dtd"> <constant name="struts.enable.DynamicMethodInvocation" value="false" /> <constant name="struts.devMode" value="false" /> <package name="hello" extends="struts-default" > <action name="Home" class="com.action.Home"> <result name="SUCCESS">/home.jsp</result> <result name="ERROR">/message.jsp</result> </action> </package> Home.java code Twitter twitter = new Twitter(); HttpSession session = request.getSession(); twitter.setOAuthConsumer(FFConstants.CONSUMER_KEY, FFConstants.CONSUMER_SECRET); AccessToken accessToken = twitter.getOAuthAccessToken((String)session.getAttribute("token"), (String)session.getAttribute("tokenSecret")); twitter.setOAuthAccessToken(accessToken); User user = twitter.verifyCredentials(); It would be great if someone could give me a pointer on it. Thanks.

    Read the article

  • Unmanaged DLL in C# Web Service

    - by Telis
    Hi guys, please help me as I am new to accessing an unmanaged DLL from C#. I have a large unmanaged DLL in C++ and I am trying to access the DLL's classes and functions from a C# web service. I have seen many examples of how to use DllImport, but for some reason I am stuck on my very first wrapper method, having spent many hours with no luck. What should I do to return an object from my 'Marshaled' [DllImport..] function? I would like to do something like this: [DllImport("unmanaged.dll")] public static extern MyClass MyFunction(); Here is the definition of my C++ class and the function that I want to access: class __declspec(dllexport) TPDate { public: TPDate(); TPDate(const TPDate& rhs); ... //today's date. static TPDate AsOfDate(void); ... } In my web service I have declared the following StructLayout: [StructLayout(LayoutKind.Sequential)] public class TPDate { public TPDate(TPDate d) { _tpDate = d; } public TPDate _tpDate; } and here's where I think that I'm not doing something right: class WrapperTPDate { [DllImport("TPTools.dll", ExactSpelling=false, EntryPoint = "?AsOfDate@TPDate@@SA?AV1@XZ", CallingConvention = CallingConvention.StdCall)] [return: MarshalAs(UnmanagedType.Struct)] public static extern TPDate AsOfDate(); // HERE THERE IS THE PROBLEM }; I am calling the wrapper as follows from my WebMethod: [WebMethod] public void ConstructModel() { TPDate date1 = WrapperTPDate.AsOfDate(); // Here I get exception TPDate date = new TPDate(date1); } The exception I am getting is: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Runtime.InteropServices.MarshalDirectiveException: Cannot marshal 'return value': Invalid managed/unmanaged type combination (this type must be paired with LPStruct or Interface). If I change it to LPStruct, I am getting another exception: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt Could you please tell me what I'm doing wrong here. Thanks
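    For reference, the underlying issue is that P/Invoke cannot construct a C++ class returned by value through a mangled entry point. The usual route is to export a flat extern "C" API from the C++ side and hand C# an opaque handle. A hedged sketch with hypothetical wrapper names:

        // TPDateWrapper.cpp - flat C API around the C++ class
        #include "TPDate.h"

        extern "C" {

        __declspec(dllexport) TPDate* TPDate_AsOfDate()
        {
            // Heap-allocate a copy and return it as an opaque handle.
            return new TPDate(TPDate::AsOfDate());
        }

        __declspec(dllexport) void TPDate_Free(TPDate* p)
        {
            delete p;   // every handle from TPDate_AsOfDate must come back here
        }

        } // extern "C"

    On the C# side the DllImport signatures then return and accept IntPtr, and no EntryPoint name-mangling guesswork is needed.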

    Read the article

  • Trying to parse OpenCV YAML output with yaml-cpp

    - by Kenn Sebesta
    I've got a series of OpenCV-generated YAML files and would like to parse them with yaml-cpp. I'm doing okay on simple stuff, but the matrix representation is proving difficult. # Center of table tableCenter: !!opencv-matrix rows: 1 cols: 2 dt: f data: [ 240, 240] This should map into the vector 240 240 with type float. My code looks like: #include "yaml.h" #include <fstream> #include <string> struct Matrix { int x; }; void operator >> (const YAML::Node& node, Matrix& matrix) { unsigned rows; node["rows"] >> rows; } int main() { std::ifstream fin("monsters.yaml"); YAML::Parser parser(fin); YAML::Node doc; Matrix m; doc["tableCenter"] >> m; return 0; } But I get terminate called after throwing an instance of 'YAML::BadDereference' what(): yaml-cpp: error at line 0, column 0: bad dereference Abort trap I searched around for some documentation for yaml-cpp, but there doesn't seem to be any, aside from a short introductory example on parsing and emitting. Unfortunately, neither of these two helps in this particular circumstance. As I understand, the !! indicates that this is a user-defined type, but I don't see with yaml-cpp how to parse that.
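    For the record, a likely immediate cause of the BadDereference: doc is never filled in - with the old yaml-cpp API the parser only populates a node once GetNextDocument is called. A sketch under that assumption (iterating the data sequence by hand rather than relying on a container overload):

        #include <fstream>
        #include <vector>
        #include "yaml.h"

        struct Matrix {
            int rows, cols;
            std::vector<float> data;
        };

        void operator >> (const YAML::Node& node, Matrix& m)
        {
            node["rows"] >> m.rows;
            node["cols"] >> m.cols;
            const YAML::Node& d = node["data"];
            for (unsigned i = 0; i < d.size(); i++) {
                float v;
                d[i] >> v;
                m.data.push_back(v);
            }
        }

        int main()
        {
            std::ifstream fin("monsters.yaml");
            YAML::Parser parser(fin);
            YAML::Node doc;
            parser.GetNextDocument(doc);   // without this, doc stays empty
            Matrix m;
            doc["tableCenter"] >> m;
            return 0;
        }

    The !!opencv-matrix tag itself doesn't block map access and can simply be ignored here.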

    Read the article

  • JNI problem when calling a native library that loads another native library

    - by TheEnemyOfQuality
    I've got a bit of an odd problem. I have a project in C++ that's basically a wrapper for a third-party DLL like this: MyLibrary --loads DLL_A ----loads DLL_B I load DLL_A with LoadLibrary(), wrap several of its functions and generate my own DLL. I've tested this in a C++ project and a C# project. Both do everything they're supposed to do: load DLL_A, make a couple of function calls, and indirectly load DLL_B. The problem is when I build a DLL for Java and make the calls through JNI. Everything runs like it should (no java.lang.UnsatisfiedLinkError), but when it comes time for DLL_A to load DLL_B it doesn't work. From debugging, the loading of DLL_B happens on a function call in DLL_A that takes a callback. When called from Java, this function call seems to fail (the function pointer is fine and the actual call goes off without a hitch), and I get an odd pop-up window saying DLL_B failed to load, and my program is left waiting for a callback that never happens. I can explicitly load DLL_B just fine (both from Java and from C++) and I've checked every possible path, path variable, and tried placing the DLLs everywhere to see if it could be looking somewhere funny. I'm pretty sure it's not a path problem. Ultimately I don't know how DLL_A is loading DLL_B and I can't figure out why everything works fine in C++ and C#, but not in Java. I'm absolutely flummoxed. It could still be something specific to my setup (although I've looked as hard as I can look), but I'm throwing this scenario out there to see if anyone has run into a similar problem. -Dave
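    One hedged thing to try: inside a JVM, the dependent DLL_B is resolved against the Java process's DLL search path (the JVM's working directory and PATH), which usually differs from the C++/C# test hosts. Explicitly adding the DLL folder to the loader's search path before anything loads DLL_A often cures exactly this "works in C++/C#, fails under JNI" pattern (the directory name below is hypothetical):

        #include <windows.h>

        HMODULE LoadWithSearchPath()
        {
            // Extend the search path LoadLibrary uses for this process,
            // so DLL_A's later LoadLibrary of DLL_B can resolve it.
            SetDllDirectoryA("C:\\path\\to\\dlls");
            return LoadLibraryA("DLL_A.dll");
        }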

    Read the article

  • How to determine the size (in bytes) of a file downloading using NSURLConnection?

    - by RexOnRoids
    I need to know the size of the file I am downloading (in bytes) into my app using NSURLConnection (GET). Here is my bytes-received code below if it helps. What I need to know is how to get the file size in bytes so that I can use it to show a UIProgressView. - (void)connection:(NSURLConnection *)theConnection didReceiveData:(NSData *)data // A delegate method called by the NSURLConnection as data arrives. We just // write the data to the file. { #pragma unused(theConnection) NSInteger dataLength; const uint8_t * dataBytes; NSInteger bytesWritten; NSInteger bytesWrittenSoFar; assert(theConnection == self.connection); dataLength = [data length]; dataBytes = [data bytes]; bytesWrittenSoFar = 0; do { bytesWritten = [self.fileStream write:&dataBytes[bytesWrittenSoFar] maxLength:dataLength - bytesWrittenSoFar]; assert(bytesWritten != 0); if (bytesWritten == -1) { [self _stopReceiveWithStatus:@"File write error"]; break; } else { bytesWrittenSoFar += bytesWritten; } } while (bytesWrittenSoFar != dataLength); }

    Read the article

  • MySQL Unique hash insertion

    - by Jesse
    So, imagine a MySQL table with a few simple columns, an auto increment, and a hash (varchar, UNIQUE). Is it possible to give MySQL a query that will add a row, and generate a unique hash without multiple queries? Currently, the only way I can think of to achieve this is with a while, which I worry would become more and more processor intensive the more entries there are in the db. Here's some pseudo-PHP, obviously untested, but it gets the general idea across: while(!query("INSERT INTO table (hash) VALUES (".generate_hash().");")){ //found conflict, try again. } In the above example, the hash column would be UNIQUE, and so the query would fail. The problem is, say there's 500,000 entries in the db and I'm working off of a base36 hash generator, with 4 characters. The likelihood of a conflict would be almost 1 in 3, and I definitely can't be running 160,000 queries. In fact, any more than 5 I would consider unacceptable. So, can I do this with pure SQL? I would need to generate a base62, 6 char string (like: "j8Du7X", chars a-z, A-Z, and 0-9), and either update the last_insert_id with it, or even better, generate it during the insert. I can handle basic CRUD with MySQL, but even JOINs are a little outside of my MySQL comfort zone, so excuse my ignorance if this is cake. Any ideas? I'd prefer to use either pure MySQL or PHP & MySQL, but hell, if another language can get this done cleanly, I'd build a script and AJAX it too. Thanks!
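    A sketch of the collision-free route the question already hints at: since the AUTO_INCREMENT id is unique by construction, base62-encoding the id after the INSERT (read LAST_INSERT_ID(), then UPDATE the hash column) guarantees a unique string with no retry loop at all. The encoder is trivial; shown here in C++, though the same few lines port to PHP directly:

        #include <string>

        std::string base62(unsigned long long id)
        {
            static const char digits[] =
                "0123456789abcdefghijklmnopqrstuvwxyz"
                "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
            std::string out;
            do {                                   // always emits at least one digit
                out.insert(out.begin(), digits[id % 62]);
                id /= 62;
            } while (id > 0);
            return out;
        }

    The trade-off is that the resulting hashes are sequential and therefore guessable; if that matters, the id can be obfuscated (e.g. XORed with a constant) before encoding.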

    Read the article

  • ELF: linking: Why do I get undefined references in .so files

    - by ki.lya.online.fr
    Hi, I'm trying to build a program against wxWidgets, and I get a linker error. I'd like to really understand what it means. The error is: /usr/lib/libwx_baseu-2.8.so: undefined reference to `std::ctype<char>::_M_widen_init() const@GLIBCXX_3.4.11' What I don't understand is why the error is in libwx_baseu-2.8.so. I thought that .so files had all their symbols resolved, contrary to .o files that still need linking. When I ldd the .so, all its linked libraries resolve, so there is no problem there: $ ldd /usr/lib/libwx_baseu-2.8.so linux-gate.so.1 => (0x00476000) libz.so.1 => /lib/libz.so.1 (0x00d9c000) libdl.so.2 => /lib/libdl.so.2 (0x002a8000) libm.so.6 => /lib/libm.so.6 (0x00759000) libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x002ad000) libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x0068d000) libpthread.so.0 => /lib/libpthread.so.0 (0x006f0000) libc.so.6 => /lib/libc.so.6 (0x00477000) /lib/ld-linux.so.2 (0x007f6000) Does it mean that the .so file was not compiled correctly (in that case, it's a bug in my distribution package), or does it mean that there are missing libraries on the linker command line for my particular program? Additionally, do you know how I can get a list of undefined symbols in an ELF file? I tried readelf -s but I can't find the missing symbol. Thank you. Mildred

    Read the article

  • Nested dereferencing arrows in Perl: to omit or not to omit?

    - by DVK
    In Perl, when you have a nested data structure, it is permissible to omit dereferencing arrows to the 2nd and deeper levels of nesting. In other words, the following two syntaxes are identical: my $hash_ref = { 1 => [ 11, 12, 13 ], 3 => [31, 32] }; my $elem1 = $hash_ref->{1}->[1]; my $elem2 = $hash_ref->{1}[1]; # exactly the same as above Now, my question is, is there a good reason to choose one style over the other? It seems to be a popular bone of stylistic contention (just on SO, I accidentally bumped into this and this in the space of 5 minutes). So far, none of the usual suspects says anything definitive: perldoc merely says "you are free to omit the pointer dereferencing arrow". Conway's "Perl Best Practices" says "whenever possible, dereference with arrows", but it appears to only apply to the context of dereferencing the main reference, not optional arrows on the 2nd level of nested data structures. "Mastering Perl for Bioinformatics" author James Tisdall doesn't give a very solid preference either: "The sharp-witted reader may have noticed that we seem to be omitting arrow operators between array subscripts. (After all, these are anonymous arrays of anonymous arrays of anonymous arrays, etc., so shouldn't they be written $array->[$i]->[$j]->[$k]?) Perl allows this; only the arrow operator between the variable name and the first array subscript is required. It makes things easier on the eyes and helps avoid carpal tunnel syndrome. On the other hand, you may prefer to keep the dereferencing arrows in place, to make it clear you are dealing with references. Your choice." Personally, I'm on the side of "always put arrows in, since it's more readable and obvious you're dealing with a reference".

    Read the article
