Search Results

Search found 9471 results on 379 pages for 'technology tid bits'.


  • C++ class is not being included properly.

    - by ravloony
    Hello all, I have a problem which is either something I have completely failed to understand, or very strange. It's probably the first one, but I have spent the whole afternoon googling with no success, so here goes... I have a class called Schedule, which has as a member a vector of Room. However, when I compile using cmake, or even by hand, I get the following:

        In file included from schedule.cpp:1:
        schedule.h:13: error: ‘Room’ was not declared in this scope
        schedule.h:13: error: template argument 1 is invalid
        schedule.h:13: error: template argument 2 is invalid
        schedule.cpp: In constructor ‘Schedule::Schedule(int, int, int)’:
        schedule.cpp:12: error: ‘Room’ was not declared in this scope
        schedule.cpp:12: error: expected ‘;’ before ‘r’
        schedule.cpp:13: error: request for member ‘push_back’ in ‘((Schedule*)this)->Schedule::_sched’, which is of non-class type ‘int’
        schedule.cpp:13: error: ‘r’ was not declared in this scope

    Here are the relevant bits of code:

        #include <vector>
        #include "room.h"

        class Schedule
        {
        private:
            std::vector<Room> _sched;   //line 13
            int _ndays;
            int _nrooms;
            int _ntslots;
        public:
            Schedule();
            ~Schedule();
            Schedule(int nrooms, int ndays, int ntslots);
        };

        Schedule::Schedule(int nrooms, int ndays, int ntslots)
            : _ndays(ndays), _nrooms(nrooms), _ntslots(ntslots)
        {
            for (int i = 0; i < nrooms; i++)
            {
                Room r(ndays, ntslots);
                _sched.push_back(r);
            }
        }

    In theory, g++ should compile a class before the one that includes it. There are no circular dependencies here, it's all straightforward stuff. I am completely stumped on this one, which is what leads me to believe that I must be missing something. :-D
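
    In cases like this the error usually means the contents of room.h never actually reached the compiler at that point (a mismatched or colliding include guard, a Room declared inside a namespace, or room.h pulling schedule.h back in). As a point of comparison, here is a minimal self-contained pair of headers (the Room shown is hypothetical, just mirroring the constructor used above) that compiles cleanly with g++:

        // room.h (hypothetical minimal version, mirroring the constructor used above)
        #ifndef ROOM_H
        #define ROOM_H

        class Room
        {
        public:
            Room(int ndays, int ntslots) : _ndays(ndays), _ntslots(ntslots) {}
        private:
            int _ndays;
            int _ntslots;
        };

        #endif // ROOM_H

        // schedule.h
        #ifndef SCHEDULE_H
        #define SCHEDULE_H

        #include <vector>
        #include "room.h"

        class Schedule
        {
        public:
            Schedule(int nrooms, int ndays, int ntslots);
        private:
            std::vector<Room> _sched;
            int _ndays, _nrooms, _ntslots;
        };

        #endif // SCHEDULE_H

    If a reduced pair like this builds while the real project does not, diffing it against the real room.h (guard macros and namespaces first) usually narrows the problem down quickly.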

    Read the article

  • AnyCPU/x86/x64 for a C# application and its C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1. My developer machine is 64-bit. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#, so I also developed a small managed DLL in C++/CLI. Both projects are in the same solution. My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understood when I googled, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64, and everything works now. However, AFAIK everything will stop working as soon as my client installs my product on a 32-bit OS. I have to support Windows Vista and 7, both the 32- and 64-bit versions of each. I don't want to fall back to 32 bits. The 250 lines of C++ code in my DLL are only 2% of my codebase, and that DLL is only used in several places, so in the typical usage scenario it's not even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting. Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .EXE is starting up, it uses either the 32-bit or the 64-bit DLL depending on the OS? Or maybe there's some better solution I'm not aware of? Thanks in advance!

    Read the article

  • Identity.Name is disposed in an IIS7 ASP.NET MVC application thread

    - by vIceBerg
    I have made the smallest demo project to illustrate my problem. You can download the sources Here. Visual Studio 2008, .NET 3.5, IIS7, Windows 7 Ultimate 32-bit. The IIS website is configured ONLY for Windows Authentication in an integrated pipeline app pool (DefaultAppPool). Here's the problem. I have an ASP.NET MVC 2 application. In an action, I start a thread. The view returns. The thread is doing its job... but it needs to access Thread.CurrentPrincipal.Identity.Name. BANG. The worker process of IIS7 stops. I get a window that says: "Visual Studio Just-In-Time Debugger: An unhandled exception ('System.ObjectDisposedException') occurred in w3wp.exe [5524]". I checked with the debugger and the Thread.CurrentPrincipal.Identity is valid, but the Name property is disposed. If I put a long wait in the action before it returns the view, then the thread can do its job and the Identity.Name is not disposed. So I think the Name gets disposed when the view is returned. For the sake of the discussion, here's the code that the thread runs (but you can also download the demo project; the link is at the top of this post):

        private void Run()
        {
            const int SECTOWAIT = 3;

            // wait SECTOWAIT seconds
            long end = DateTime.Now.Ticks + (TimeSpan.TicksPerSecond * SECTOWAIT);
            while (DateTime.Now.Ticks <= end)
                continue;

            // Check the current principal. BANG!!!!!!!!!!!!!
            var userName = Thread.CurrentPrincipal.Identity.Name;
        }

    Here's the code that starts the thread:

        public void Start()
        {
            Thread thread = new Thread(new ParameterizedThreadStart(ThreadProc));
            thread.SetApartmentState(ApartmentState.MTA);
            thread.Name = "TestThread";
            thread.Start(this);
        }

        static void ThreadProc(object o)
        {
            try
            {
                Builder builder = (Builder)o;
                builder.Run();
            }
            catch (Exception ex)
            {
                throw;
            }
        }

    So... what am I doing wrong? Thanks

    Read the article

  • Bitwise operators and converting an int to 2 bytes and back again.

    - by aKiwi
    First-time user, hi guys! So hopefully someone can help. My background is PHP, so entering the world of low-end stuff (chars are bytes, which are bits, which are binary values, etc.) is taking some time to get the hang of ;) What I'm trying to do here is send some values from an Arduino board to openFrameworks (both are C++). What this script currently does (and it works well for one sensor, I might add) when asked for the data to be sent is:

        int value_01 = analogRead(0);   // which outputs between 0-1024

        unsigned char val1;
        unsigned char val2;

        // some complicated bitshift operation
        val1 = value_01 & 0xFF;
        val2 = (value_01 >> 8) & 0xFF;

        // send both bytes
        Serial.print(val1, BYTE);
        Serial.print(val2, BYTE);

    Apparently this is the most reliable way of getting the data across. So now that it is sent via the serial port, the bytes are added to a char string and converted back by:

        int num = ( (unsigned char)bytesReadString[1] << 8 | (unsigned char)bytesReadString[0] );

    So to recap, I'm trying to get 4 sensors' worth of data (which I'm assuming will be 8 of those Serial.prints?) and to have int num_01 - num_04 at the end of it all. I'm assuming this (as with most things) might be quite easy for someone with experience in these concepts. Any help would be greatly appreciated. Thanks
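
    For what it's worth, the same low-byte/high-byte split scales to any number of sensors: each reading goes out as two bytes and the receiver reassembles them in the same order. A small desktop-side sketch of the idea (the names packReading/unpackReading are made up for illustration; it assumes four readings, sent low byte first):

        #include <cstdint>
        #include <cstdio>

        // Split one 0-1023 reading into two bytes (low byte first).
        void packReading(uint16_t value, uint8_t out[2])
        {
            out[0] = value & 0xFF;          // low 8 bits
            out[1] = (value >> 8) & 0xFF;   // high 8 bits
        }

        // Rebuild the reading from the two received bytes.
        uint16_t unpackReading(const uint8_t in[2])
        {
            return static_cast<uint16_t>(in[0]) | (static_cast<uint16_t>(in[1]) << 8);
        }

        int main()
        {
            uint16_t sensors[4] = { 818, 1023, 512, 7 };    // sample readings
            uint8_t wire[8];                                // 4 sensors * 2 bytes each

            for (int i = 0; i < 4; ++i)
                packReading(sensors[i], &wire[i * 2]);      // 8 bytes total on the wire

            for (int i = 0; i < 4; ++i)
                std::printf("num_0%d = %d\n", i + 1, static_cast<int>(unpackReading(&wire[i * 2])));
            return 0;
        }

    So yes, four sensors comes out to eight of those Serial.print(val, BYTE) calls per sample, as long as both ends agree on the sensor order and on which byte comes first.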

    Read the article

  • Python: Create nested dictionary from list of paths

    - by sberry2A
    I have a list of tuples that looks similar to this (simplified here; there are over 14,000 of these tuples, with more complicated paths than Obj.part):

        [ (Obj1.part1, {<SPEC>}), (Obj1.partN, {<SPEC>}), (ObjK.partN, {<SPEC>}) ]

    where Obj goes from 1 - 1000 and part from 0 - 2000. These "keys" all have a dictionary of specs associated with them which acts as a lookup reference for inspecting another binary file. The specs dict contains information such as the bit offset, bit size, and C type of the data pointed to by the path ObjK.partN. For example, Obj4.part500 might have this spec, {'size':32, 'offset':128, 'type':'int'}, which would let me know that to access Obj4.part500 in the binary file I must unpack 32 bits from offset 128. So, now I want to take my list of strings and create a nested dictionary which in the simplified case will look like this:

        data = { 'Obj1' : {'part1':{spec}, 'partN':{spec} },
                 'ObjK' : {'part1':{spec}, 'partN':{spec} } }

    To do this I am currently doing two things. 1. I am using a dotdict class to be able to use dot notation for dictionary get / set. That class looks like this:

        class dotdict(dict):
            def __getattr__(self, attr):
                return self.get(attr, None)
            __setattr__ = dict.__setitem__
            __delattr__ = dict.__delitem__

    The method for creating the nested "dotdict"s looks like this:

        def addPath(self, spec, parts, base):
            if len(parts) > 1:
                item = base.setdefault(parts[0], dotdict())
                self.addPath(spec, parts[1:], item)
            else:
                item = base.setdefault(parts[0], spec)
            return base

    Then I just do something like:

        for path, spec in paths:
            self.lookup = dotdict()
            self.addPath(spec, path.split("."), self.lookup)

    So, in the end self.lookup.Obj4.part500 points to the spec. Is there a better (more pythonic) way to do this?

    Read the article

  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine):

        Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway.

    I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I've never before heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing would result in you having a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article.) And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early on in the life of .NET.

    Read the article

  • Javascript CS-PRNG - 64-bit random

    - by Jack
    Hi, I need to generate a cryptographically secure 64-bit unsigned random integer in JavaScript. The first problem is that JavaScript only allows 64-bit signed integers, so 9223372036854775808 is the biggest supported integer without going into floating point use, I think? To fix this I can use a big number library, no problem. My method:

        var randNum = SHA256( randBigInt(128, 0) ) % 2^64;

    where SHA256() is a secure hash function and randBigInt() is defined below as a non-crypto PRNG; I'm giving it a 128-bit seed so brute force shouldn't be a problem.

        randBigInt(n,s)  //return an n-bit random BigInt (n>=1). If s=1, then the most significant of those n bits is set to 1.

    Is this a secure method to generate a cryptographically secure 64-bit random int? And importantly, does taking the 2^64 mod guarantee 100% that I have a 64-bit number? An abstract example: say this number is prime (it isn't, I know), I will use it in the Galois Field [2^p], where p must be 64 bits so that every possible 1-63 bit number is a field element. In this query, my random int must be larger than any 63-bit number. And I'm not sure I'm correct in taking the 2^64 mod of a 256-bit hash output. Thanks (hope that makes sense)

    Read the article

  • Would making plain int 64-bit break a lot of reasonable code?

    - by R..
    Until recently, I'd considered the decision by most systems implementors/vendors to keep plain int 32-bit even on 64-bit machines a sort of expedient wart. With modern C99 fixed-size types (int32_t and uint32_t, etc.) the need for there to be a standard integer type of each size 8, 16, 32, and 64 mostly disappears, and it seems like int could just as well be made 64-bit. However, the biggest real consequence of the size of plain int in C comes from the fact that C essentially does not have arithmetic on smaller-than-int types. In particular, if int is larger than 32-bit, the result of any arithmetic on uint32_t values has type signed int, which is rather unsettling. Is this a good reason to keep int permanently fixed at 32-bit on real-world implementations? I'm leaning towards saying yes. It seems to me like there could be a huge class of uses of uint32_t which break when int is larger than 32 bits. Even applying the unary minus or bitwise complement operator becomes dangerous unless you cast back to uint32_t. Of course the same issues apply to uint16_t and uint8_t on current implementations, but everyone seems to be aware of and used to treating them as "smaller-than-int" types.
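
    The promotion hazard described above can already be watched in action with uint16_t on today's 32-bit-int implementations, which is a fair preview of what uint32_t code would face under a 64-bit int. A small illustration (behaviour shown is what a typical ILP32/LP64 compiler produces):

        #include <cstdint>
        #include <cstdio>

        int main()
        {
            uint16_t a = 0xFFF0;
            uint16_t b = 0x0020;

            // Both operands are promoted to (signed) int before the arithmetic happens,
            // so "unsigned" subtraction can produce a negative value instead of wrapping.
            int diff = a - b;   // 65488, as expected
            int wrap = b - a;   // -65488, not 48 (the mod-2^16 result)

            // Unary minus and ~ also yield int; the expected bit pattern only comes back
            // once the result is converted to uint16_t again.
            int neg = -b;                                   // -32
            uint16_t back = static_cast<uint16_t>(neg);     // 65504 == 2^16 - 32

            std::printf("%d %d %d %d\n", diff, wrap, neg, static_cast<int>(back));
            return 0;
        }

    Under a 64-bit int, exactly the same promotion would apply to uint32_t operands, which is the class of breakage the question is worried about.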

    Read the article

  • Delphi fast large bitmap creation (without clearing)

    - by Ritsaert Hornstra
    When using the TBitmap wrapper for a GDI bitmap from the Graphics unit, I noticed it will always clear out the bitmap (using a PatBlt call) when setting up a bitmap with SetSize( w, h ). When I copy in the bits later on (see the routine below), it seems ScanLine is the fastest possibility, not SetDIBits.

        function ToBitmap: TBitmap;
        var
          i, N, x: Integer;
          S, D: PAnsiChar;
        begin
          Result := TBitmap.Create();
          Result.PixelFormat := pf32bit;
          Result.SetSize( width, height );
          S := Src;
          D := Result.ScanLine[ 0 ];
          x := Integer( Result.ScanLine[ 1 ] ) - Integer( D );
          N := width * sizeof( longword );
          for i := 0 to height - 1 do
          begin
            Move( S^, D^, N );
            Inc( S, N );
            Inc( D, x );
          end;
        end;

    The bitmaps I need to work with are quite large (150MB of RGB memory). With these images it takes 150ms simply to create an empty bitmap and a further 140ms to overwrite its contents. Is there a way of initializing a TBitmap with the correct size WITHOUT initializing the pixels themselves, leaving the pixel memory uninitialized (i.e. dirty)? Or is there another way to do such a thing? I know we could work on the pixels in place, but this still leaves the 150ms of unnecessary initialization of the pixels.

    Read the article

  • GPL/LGPL-licensed images + iPhone development

    - by cubic1271
    Since the majority of legal links / READMEs I've found when browsing icon sets refer me to the general GPL / LGPL (as opposed to a specialized version of some kind) when I'm looking at license restrictions, I'm having a terrible time trying to figure out what would constitute source code, linking, etc. when it comes to images and / or icons. One specific example: under section 5 of the GPL, modifications must carry notices in the source code. . . how do I do that with an image? I guess I could try to find a few unused bits and encode my modifications in there (steganography, anyone?), but somehow that doesn't seem like what the license is shooting for. There are also other sections in there where I have no idea how to begin to comply with. Thus, I'm really confused. What exactly are the implications of using GPL and / or LGPL licensed images in something that isn't itself GPL'd? Specifically, I'd like to know what using GPL icons in an iPhone application might mean from a legal point of view. It feels like I'm missing something obvious here; any enlightenment / references would be appreciated!

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app running under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a bounds checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint with the hope of locating potential deadlocks, but have not had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies where possible instead. What other tests would you carry out?

    Read the article

  • Formulae for U and V buffer offset

    - by Abhi
    Hi all! What should the buffer offset values for U and V be in the YUV444 format? For example, if I am using the YV12 format the values are as follows:

        ppData.inputIDMAChannel.UBufOffset = iInputHeight * iInputWidth + (iInputHeight * iInputWidth)/4;
        ppData.inputIDMAChannel.VBufOffset = iInputHeight * iInputWidth;

    with iInputHeight = 160 and iInputWidth = 112. ppData is an object of the following structure:

        typedef struct ppConfigDataStruct
        {
            //---------------------------------------------------------------
            // General controls
            //---------------------------------------------------------------
            UINT8 IntType;              // FIRSTMODULE_INTERRUPT: the interrupt will be
                                        // raised once the first sub-module finished its job.
                                        // FRAME_INTERRUPT: the interrupt will be raised
                                        // after all sub-modules finished their jobs.

            //---------------------------------------------------------------
            // Format controls
            //---------------------------------------------------------------
            // For input
            idmaChannel inputIDMAChannel;

            BOOL bCombineEnable;
            idmaChannel inputcombIDMAChannel;
            UINT8 inputcombAlpha;
            UINT32 inputcombColorkey;
            icAlphaType alphaType;

            // For output
            idmaChannel outputIDMAChannel;

            CSCEQUATION CSCEquation;    // Selects R2Y or Y2R CSC Equation
            icCSCCoeffs CSCCoeffs;      // Selects R2Y or Y2R CSC Equation
            icFlipRot FlipRot;          // Flip/Rotate controls for VF

            BOOL allowNopPP;            // flag to indicate we need a NOP PP processing
        }*pPpConfigData, ppConfigData;

    and the idmaChannel structure is as follows:

        typedef struct idmaChannelStruct
        {
            icFormat FrameFormat;       // YUV or RGB
            icFrameSize FrameSize;      // frame size
            UINT32 LineStride;          // stride in bytes
            icPixelFormat PixelFormat;  // Input frame RGB format, set NULL
                                        // to use standard settings.
            icDataWidth DataWidth;      // Bits per pixel for RGB format
            UINT32 UBufOffset;          // offset of U buffer from Y buffer start address
                                        // ignored if non-planar image format
            UINT32 VBufOffset;          // offset of V buffer from Y buffer start address
                                        // ignored if non-planar image format
        } idmaChannel, *pIdmaChannel;

    I want the formulae for ppData.inputIDMAChannel.UBufOffset and ppData.inputIDMAChannel.VBufOffset for YUV444. Thanks in advance
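
    Following the pattern of the YV12 values above (Y plane first, one byte per sample per plane), a planar 4:4:4 layout has full-size chroma planes, so each plane occupies iInputWidth * iInputHeight bytes. Assuming the same Y-then-V-then-U plane order as the YV12 case (the order actually expected for the 444 format should be confirmed against the IPU documentation), the offsets work out as in this sketch:

        #include <cstdio>

        int main()
        {
            const int iInputWidth  = 112;
            const int iInputHeight = 160;
            const int planeSize    = iInputWidth * iInputHeight;   // one full-resolution plane, 1 byte/sample

            // YV12 (4:2:0, quarter-size chroma planes), matching the values in the question
            const int yv12_VBufOffset   = planeSize;
            const int yv12_UBufOffset   = planeSize + planeSize / 4;

            // Planar YUV 4:4:4 (full-size chroma planes), same Y, V, U ordering assumed
            const int yuv444_VBufOffset = planeSize;
            const int yuv444_UBufOffset = 2 * planeSize;

            std::printf("YV12   : VBufOffset=%d UBufOffset=%d\n", yv12_VBufOffset, yv12_UBufOffset);
            std::printf("YUV444 : VBufOffset=%d UBufOffset=%d\n", yuv444_VBufOffset, yuv444_UBufOffset);
            return 0;
        }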

    Read the article

  • getSharedPreferences not working for me with concerns to ListPreferences and Integers

    - by ideagent
    I'm stuck at a point where I'm trying to get my project to read a preference value (from a ListPreference listing) and then use that value in a basic mathematical subtraction. The problem is that the "seek" preference is not being seen by my Java code, and yet the default value is (I've tried the default value with 3000 and now 0). Am I missing something, or is there a bug here, known or unknown? Java code chunk where the issue manifests itself:

        public static final String PREF_FILE_NAME = "preferences";

        seekback.setOnClickListener(new Button.OnClickListener() {
            @Override
            public void onClick(View view) {
                try {
                    SharedPreferences preferences = getSharedPreferences(PREF_FILE_NAME, MODE_PRIVATE);
                    Integer storedPreference = preferences.getInt("seek", 0);
                    (mediaPlayer.getCurrentPosition()-storedPreference);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });

    Here are some other code bits from my project. From the preferences file: <ListPreference android:entries="@array/seconds" android:entryValues="@array/seconds_values" android:summary="sets the seek interval for the seekback and seekforward buttons" android:title="Seek Interval" android:defaultValue="5000" android:key="@string/seek" From the strings file: seek From an array file: Five seconds Fifteen seconds Thirty seconds Sixty seconds 5000 15000 30000 60000. Let me know if you need to see more code to figure this one out. Thanks in advance for any help that can be offered. I've worked over this issue now for a few hours and I'm burnt; a second pair of eyes on it would be very much appreciated. Arg, not sure how to get the code and plain text to format nicely here, even tried the options, like Code Sample, no luck. AndroidCoder

    Read the article

  • Potential problem with C standard malloc'ing chars.

    - by paxdiablo
    When answering a comment to another answer of mine here, I found what I think may be a hole in the C standard (c1x, I haven't checked the earlier ones and yes, I know it's incredibly unlikely that I alone among all the planet's inhabitants have found a bug in the standard). Information follows: Section 6.5.3.4 ("The sizeof operator") para 2 states "The sizeof operator yields the size (in bytes) of its operand". Para 3 of that section states: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1". Section 7.20.3.3 describes void *malloc(size_t sz) but all it says is "The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate". It makes no mention at all of what units are used for the argument. Annex E states that 8 is the minimum value for CHAR_BIT, so chars can be more than one byte in length. My question is simply this: In an environment where a char is 16 bits wide, will malloc(10 * sizeof(char)) allocate 10 chars (20 bytes) or 10 bytes? Point 1 above seems to indicate the former, point 2 indicates the latter. Anyone with more C-standard-fu than me have an answer for this?
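
    One detail worth pinning down when reading those clauses: in the standard's own terminology a "byte" is the amount of storage occupied by one char, so sizeof counts in char-sized units and CHAR_BIT says how many bits each of those units holds. A small probe along those lines (written so it builds as C++ and, as it happens, as plain C too):

        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            /* sizeof(char) is 1 by definition; CHAR_BIT tells us how many bits that unit holds. */
            printf("CHAR_BIT     = %d\n", CHAR_BIT);
            printf("sizeof(char) = %d\n", (int)sizeof(char));
            printf("sizeof(int)  = %d (%d bits)\n", (int)sizeof(int), (int)(sizeof(int) * CHAR_BIT));

            /* The size argument to malloc is measured in those same char-sized units,
               so this requests storage for exactly 10 chars, whatever CHAR_BIT is. */
            char *p = (char *)malloc(10 * sizeof(char));
            if (p != NULL)
                free(p);
            return 0;
        }

    Read that way, malloc(10 * sizeof(char)) on a 16-bit-char machine asks for 10 chars, i.e. 10 of that machine's (16-bit) bytes; the count stays the same, only the width of the unit changes.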

    Read the article

  • SELECT * FROM <table> BETWEEN <value typed in a JTextField> AND <idem>

    - by Rodrigo Sieja Bertin
    In this part of the program, the JInternalFrame file FrmMovimento, I made a button called Visualizar ("View"). Based on what we type on the text fields, it must show the interval the user defined. There are these JTextFields: Code from: _ To: _ Asset: _ And these JFormattedTextFields: Date from: _ To: _ There are registers appearing already in the JDesktopPane from JInternalFrame FrmListarMov, if I use another SELECT statement selecting all registers, for example. But not if I type as I did in the title: public List<MovimentoVO> Lista() throws ErroException, InformacaoException{ List<MovimentoVO> listaMovimento = new ArrayList<> (); try { MySQLDAO.getInstancia().setAutoCommit(false); try (PreparedStatement stmt = MySQLDAO.getInstancia().prepareStatement( "SELECT * FROM Cadastro2 WHERE Codigo BETWEEN "+ txtCodDeMov +" AND "+ txtCodAteMov +";") { ResultSet registro = stmt.executeQuery(); while(registro.next()){ MovimentoVO Movimento = new MovimentoVO(); Movimento.setCodDeMov(registro.getInt(1)); Movimento.setCodAteMov(registro.getInt(2)); Movimento.setAtivoMov(registro.getString(3)); Movimento.setDataDeMov(registro.getString(4)); Movimento.setDataAteMov(registro.getString(5)); listaMovimento.add(Movimento); } } } catch (SQLException ex) { throw new ErroException(ex.getMessage()); } catch (InformacaoException ex) { throw ex; } return listaMovimento; } In the SELECT line, txtCodDeMov is how I named the JTextField of "Code from" and txtCodAtemov is how I named the JTextField of the first "To". Oh, I'm using NetBeans 7.1.2 (Build 201204101705) and MySQL Ver 14.14 Distrib 5.1.63 in a Linux Mint 12 64-bits.

    Read the article

  • If we make a number every millisecond, how much data would we have in a day?

    - by Roger Travis
    I'm a bit confused here... I'm being offered the chance to get into a project where there would be an array of certain sensors that would give off a reading every millisecond (yes, 1000 readings a second). Each reading would be a 3 or 4 digit number, for example 818 or 1529. These readings need to be stored in a database on a server and accessed remotely. I have never worked with such big amounts of data. What do you think: how much, in terms of MBs, would the readings from one sensor for a day be? 4 (digits) x 1000 x 60 x 60 x 24 = 345600000 bits... right? About 42 MB per day... doesn't seem too bad, right? Therefore a DB of, say, 1 GB would hold 23 days of info from 1 sensor, correct? I understand that MySQL & PHP probably would not be able to handle it... what would you suggest, maybe some apps? Azure? Oracle? ... Thanks!
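
    As a cross-check it can help to redo that arithmetic in bytes rather than digits: a 3-4 digit value fits in a 2-byte integer (4 bytes if stored as a plain int), and timestamps plus per-row database overhead come on top of that. A quick back-of-the-envelope sketch, assuming raw sample values only:

        #include <cstdio>

        int main()
        {
            const long long readingsPerSecond = 1000;
            const long long secondsPerDay     = 60LL * 60 * 24;                     // 86,400
            const long long readingsPerDay    = readingsPerSecond * secondsPerDay;  // 86,400,000

            const long long bytesPerDay2 = readingsPerDay * 2;   // values stored as 2-byte integers
            const long long bytesPerDay4 = readingsPerDay * 4;   // values stored as 4-byte integers

            std::printf("readings per day : %lld\n", readingsPerDay);
            std::printf("2 bytes/reading  : %lld bytes (~%.0f MB)\n", bytesPerDay2, bytesPerDay2 / 1e6);
            std::printf("4 bytes/reading  : %lld bytes (~%.0f MB)\n", bytesPerDay4, bytesPerDay4 / 1e6);
            return 0;
        }

    Per-row storage overhead and any timestamp column will push the real figure higher, so the raw numbers above are a lower bound.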

    Read the article

  • WPF - Transparency - Stream Desktop Content

    - by Niels Willems
    Greetings! I'm in the process of making a scoreboard for a game (StarCraft II). This scoreboard is being made as a WPF application with a C# code-behind. I already have a version which works for 90% in WinForms, but I lacked the support to easily make it look a lot nicer, which is available in WPF. The point of this application is to form a kind of overlay on top of a running game. The game is in Fullscreen (Windowed Mode), so in WinForms I coded it so that it should always be on top. It would do so and that was no problem. Since the main look of the app in WPF is based on an image with a transparent background, I have set most Background values to Transparent. However, when I do this the entire application does not get registered by streaming software. For example, it just shows my desktop or the game I'm playing but not my application, even though it IS there. I can see it with my own eyes, but the audience on the stream cannot. Does anyone have any experience with this matter? It's really doing my head in. My entire application will be useless if it is not visible on streams. If I have to put the background on a color rather than transparent, the UI will be completely demolished in terms of looks as well. I'm basically trying to make a game overlay in C# & WPF. I have read you can do this in different ways as well, but I have little to no knowledge of C++, nor do I know anything about DirectX. Thank you for your time reading and your possible insights. Edit: The best solution would be an overlay similar to those of Steam/Xfire/Dolby Axon. Edit 2: I've had no luck with all the suggestions, so I basically made the transparent bits of my image non-transparent and let the user decide which one to use depending on what streaming software they would be using.

    Read the article

  • anti-if campaign

    - by Andrew Siemer
    I recently ran across a very interesting site that expresses a very interesting idea - the anti-if campaign. You can see it at www.antiifcampaign.com. I have to agree that complex nested IF statements are an absolute pain in the rear. I am currently on a project that up until very recently had some crazy nested IFs that scrolled to the right for quite a ways. We cured our issues in two ways - we used Windows Workflow Foundation to address routing (or workflow) concerns, and we are in the process of implementing all of our business rules using ILOG Rules for .NET (recently purchased by IBM!!). This has for the most part cured our nested IF pains... but I find myself wondering how many people cure their pains in the manner that the good folks at the AntiIfCampaign suggest (see an example here), by creating numerous abstract classes to represent a given scenario that was originally covered by the nested IF. I wonder if another way to address the removal of this complexity might also be to use an IoC container such as StructureMap to move in and out of different bits of functionality. Either way... Question: Given a scenario where I have a complex nested IF or SWITCH statement that is used to evaluate a given type of thing (say an enum) to determine how I want to handle the processing of that thing by enum type, what are some ways to do the same form of processing without using the IF or SWITCH hierarchical structure?

        public enum WidgetTypes
        {
            Type1,
            Type2,
            Type3,
            Type4
        }
        ...
        WidgetTypes _myType = WidgetTypes.Type1;
        ...
        switch(_myType)
        {
            case WidgetTypes.Type1:
                //do something
                break;
            case WidgetTypes.Type2:
                //do something
                break;
            //etc...
        }
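
    One common alternative to the switch is table-driven dispatch: register one handler per enum value up front and look the handler up at run time (the other classic route, as the campaign's own examples show, is polymorphism with one class per widget type). The snippet above is C#, but the shape of the idea is language-neutral; here it is sketched in C++ terms:

        #include <cstdio>
        #include <functional>
        #include <map>

        enum class WidgetType { Type1, Type2, Type3, Type4 };

        int main()
        {
            // One handler per enum value; adding a new type means adding an entry, not a new case.
            const std::map<WidgetType, std::function<void()>> handlers = {
                { WidgetType::Type1, [] { std::puts("handling Type1"); } },
                { WidgetType::Type2, [] { std::puts("handling Type2"); } },
                { WidgetType::Type3, [] { std::puts("handling Type3"); } },
                { WidgetType::Type4, [] { std::puts("handling Type4"); } },
            };

            WidgetType myType = WidgetType::Type1;

            auto it = handlers.find(myType);
            if (it != handlers.end())
                it->second();          // dispatch without a switch
            return 0;
        }

    The trade-off is the usual one: the switch keeps all the branching visible in one place, while the table (or an IoC container wired the same way) lets new widget types be added without touching the dispatch site.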

    Read the article

  • Auditing front end performance on web application

    - by user1018494
    I am currently trying to performance tune the UI of a company web application. The application is only ever going to be accessed by staff, so the speed of the connection between the server and client will always be considerably better than if it were on the internet. I have been using performance auditing tools such as YSlow and Google Chrome's profiling tool to try and highlight areas that are worth targeting for investigation. However, these tools are written with the internet in mind. For example, the current suggestions from a Google Chrome audit of the application are as follows:

        Network Utilization
          Combine external CSS (Red warning)
          Combine external JavaScript (Red warning)
          Enable gzip compression (Red warning)
          Leverage browser caching (Red warning)
          Leverage proxy caching (Amber warning)
          Minimise cookie size (Amber warning)
          Parallelize downloads across hostnames (Amber warning)
          Serve static content from a cookieless domain (Amber warning)

        Web Page Performance
          Remove unused CSS rules (Amber warning)
          Use normal CSS property names instead of vendor-prefixed ones (Amber warning)

    Are any of these bits of advice totally redundant given the connection speed and usage pattern? The users will be using the application frequently throughout the day, so it doesn't matter if the initial hit is large (when they first visit the page and build their cache) so long as a minimal amount of work is done on future page views. For example, is it worth the effort of combining all of our CSS and JavaScript files? It may speed up the initial page view, but how much of a difference will it really make on subsequent page views throughout the working day? I've tried searching for this but all I keep coming up with is the standard internet-facing performance advice. Any advice on what to focus my performance tweaking efforts on in this scenario, or other auditing tool recommendations, would be much appreciated.

    Read the article

  • GNU C++ how to check when -std=c++0x is in effect?

    - by TerryP
    My system compiler (gcc42) works fine with the TR1 features that I want, but when trying to support newer compiler versions other than the system's, accessing the TR1 headers triggers an #error demanding the -std=c++0x option, because of how the compiler interfaces with the library or some hubbub like that:

        /usr/local/lib/gcc45/include/c++/bits/c++0x_warning.h:31:2: error: #error This file requires compiler and library support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must be enabled with the -std=c++0x or -std=gnu++0x compiler options.

    Having to supply an extra switch to support GCC 4.4 and 4.5 on this system (FreeBSD) is no problem, but obviously it changes the picture! Using my system compiler (g++ 4.2, default dialect):

        #include <tr1/foo>
        using std::tr1::foo;

    Using newer (4.5) versions of the compiler with -std=c++0x:

        #include <foo>
        using std::foo;

    Is there any way, using the preprocessor, that I can tell if g++ is running with C++0x features enabled? Something like this is what I'm looking for:

        #ifdef __CXX0X_MODE__
        #endif

    but I have not found anything in the manual or off the web. At this rate, I'm starting to think that life would just be easier using Boost as a dependency, and not worrying about a new language standard arriving before TR4... hehe.
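
    For what it's worth, GCC of that era defines __GXX_EXPERIMENTAL_CXX0X__ whenever -std=c++0x or -std=gnu++0x is in effect, and newer compilers signal the mode through the value of __cplusplus, so a header can switch on those. A sketch of the kind of conditional include this allows (unordered_map is chosen only as a concrete stand-in for whichever TR1/C++0x header is wanted):

        // Pick the TR1 or C++0x flavour of a header depending on how we are compiled.
        // __GXX_EXPERIMENTAL_CXX0X__ is defined by GCC under -std=c++0x / -std=gnu++0x;
        // conforming C++11 compilers also raise __cplusplus to 201103L.
        #if defined(__GXX_EXPERIMENTAL_CXX0X__) || __cplusplus >= 201103L
          #include <unordered_map>
          typedef std::unordered_map<int, int> IntMap;
        #else
          #include <tr1/unordered_map>
          typedef std::tr1::unordered_map<int, int> IntMap;
        #endif

        #include <cstdio>

        int main()
        {
            IntMap m;
            m[1] = 42;
            std::printf("m[1] = %d\n", m[1]);
            return 0;
        }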

    Read the article

  • Dangers when deploying Flash/Flex UI test automation hooks to production?

    - by Merlyn Morgan-Graham
    I am interested in doing automated testing against a Flex based UI. I have found out that my best options for UI automation (due to being C# controllable, good licensing conditions, etc) all seem to require that I compile test hooks into my application. Because of this, I am thinking of recommending that these hooks be compiled into our build. I have found a few places on the net that recommend not deploying bits with this instrumentation enabled, and I'd like to know why. Is it a performance drain, or a security risk? If it is a security risk, can you explain how the attack surface is increased? I am not a Flash or Flex developer, though I have some experience with threat modeling. For reference, here's the tools I'm specifically considering: QTP Selenium-Flex API I am having problems finding all the warnings/suggestions I found last night, but here's an example that I can find: http://www.riatest.com/products/getting-started.html Warning! Automation enabled applications expose all properties of all GUI components. This makes them vulnerable to malicious use. Never make automation enabled application publicly available. Always restrict access to such applications and to RIATest Loader to trusted users only. Related question (how to do conditional compilation to insert/remove those hooks): Conditionally including Flex libraries (SWCs) in mxmlc/compc ant tasks

    Read the article

  • Real Time Sound Capturing in J2ME

    - by Abdul jalil
    I am capturing sound in J2ME and sending the bytes to a remote system, then playing those bytes back on the remote system. Five seconds of voice are captured and sent to the remote system, but I get the same sound repeated again. I am making a voice messenger; please help me see where I am going wrong. I am using the following code:

        String remoteTimeServerAddress = "192.168.137.179";
        sc = (SocketConnection) Connector.open("socket://" + remoteTimeServerAddress + ":13");
        p = Manager.createPlayer("capture://audio?encoding=pcm&rate=11025&bits=16&channels=1");
        p.realize();
        RecordControl rc = (RecordControl) p.getControl("RecordControl");
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        OutputStream outstream = sc.openOutputStream();
        rc.setRecordStream(output);
        rc.startRecord();
        p.start();
        int size = output.size();
        int offset = 0;
        while (true)
        {
            Thread.currentThread().sleep(5000);
            rc.commit();
            output.flush();
            size = output.size();
            if (size > 0)
            {
                recordedSoundArray = output.toByteArray();
                outstream.write(recordedSoundArray, 0, size);
            }
            output.reset();
            rc.reset();
            rc.setRecordStream(output);
            rc.startRecord();
        }

    Read the article

  • Solved: Help with this compile error

    - by Scott
    I just picked up an old project and I'm not sure what the following error could mean. g++ -o BufferedReader.o -c -g -Wall -std=c++0x -I/usr/include/xmms2 -Ijsoncpp/include/json/ -fopenmp -I/usr/include/ImageMagick -I/usr/include/xmms2 -I/usr/include/libvisual-0.4 -D_GNU_SOURCE=1 -D_REENTRANT -I/usr/include/SDL -DQT_CORE_LIB -DQT_GUI_LIB -DQT_SCRIPT_LIB -DQT_SHARED -I/usr/include/QtCore -I/usr/include/QtGui -I/usr/include/QtScript BufferedReader.cpp In file included from BufferedReader.cpp:23: /usr/include/string.h:36:42: error: missing binary operator before token "(" In file included from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/cwchar:47, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/bits/postypes.h:42, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/iosfwd:42, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/ios:39, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/istream:40, from /usr/lib/gcc/i686-redhat-linux/4.4.3/../../../../include/c++/4.4.3/sstream:39, from BufferedReader.cpp:24: At line 24 of BufferedReader.cpp is #include <string.h>. I've tried it with just <string> but get the same thing. Any clue? Here's the snippet of code from string.h /* Tell the caller that we provide correct C++ prototypes. */ #if defined __cplusplus && __GNUC_PREREQ (4, 4) //line 36 # define __CORRECT_ISO_CPP_STRING_H_PROTO #endif Does that mean __GNUC_PREREQ isn't defined? Edit: Changing -Ijsoncpp/include/json/ to Ijsoncpp/include stopped the errors. I noticed I was including <json/json.h>. I'm about to switch to JsonGlib though, which is the reason I pulled the project up again. So it's all good. :)

    Read the article

  • Symbols (pdb) for native dll are not loaded due to post build step

    - by sean e
    I have a native release dll that is built with symbols. There is a post build step that modifies the dll. The post build step does some compression and probably appends some data. The pdb file is still valid however neither WinDbg nor Visual Studio 2008 will load the symbols for the dll after the post build step. What bits in either the pdb file or the dll do we need to modify to get either WinDbg or Visual Studio to load the symbols when it loads a dump in which our release dll is referenced? Is it filesize that matters? A checksum or hash? A timestamp? Modify the dump? or modify the pdb? modify the dll before it is shipped? (We know the pdb is valid because we are able to use it to manually get symbol names for addresses in dump callstacks that reference the released dll. It's just a total pain in the *ss do it by hand for every address in a callstack in all the threads.)

    Read the article

  • Validating parameters according to a fixed reference

    - by James P.
    The following method is for setting the transfer type of an FTP connection. Basically, I'd like to validate the character input (see comments). Is this going overboard? Is there a more elegant approach? How do you approach parameter validation in general? Any comments are welcome.

        public void setTransferType(Character typeCharacter, Character optionalSecondCharacter)
                throws NumberFormatException, IOException {
            // http://www.nsftools.com/tips/RawFTP.htm#TYPE
            // Syntax: TYPE type-character [second-type-character]
            //
            // Sets the type of file to be transferred. type-character can be any of:
            //
            //   * A - ASCII text
            //   * E - EBCDIC text
            //   * I - image (binary data)
            //   * L - local format
            //
            // For A and E, the second-type-character specifies how the text should
            // be interpreted. It can be:
            //
            //   * N - Non-print (not destined for printing). This is the default if
            //         second-type-character is omitted.
            //   * T - Telnet format control (<CR>, <FF>, etc.)
            //   * C - ASA Carriage Control
            //
            // For L, the second-type-character specifies the number of bits per
            // byte on the local system, and may not be omitted.
            final Set<Character> acceptedTypeCharacters = new HashSet<Character>(Arrays.asList(
                    new Character[] {'A','E','I','L'} ));
            final Set<Character> acceptedOptionalSecondCharacters = new HashSet<Character>(Arrays.asList(
                    new Character[] {'N','T','C'} ));

            if( acceptedTypeCharacters.contains(typeCharacter) ) {
                if( new Character('A').equals( typeCharacter ) || new Character('E').equals( typeCharacter ) ){
                    if( acceptedOptionalSecondCharacters.contains(optionalSecondCharacter) ) {
                        executeCommand("TYPE " + typeCharacter + " " + optionalSecondCharacter );
                    }
                } else {
                    executeCommand("TYPE " + typeCharacter );
                }
            }
        }

    Read the article
