Search Results

Search found 4690 results on 188 pages for 'ran'.

Page 152/188

  • Objective-C: Getting the True Class of Classes in Class Clusters

    - by TechZen
    Recently, while trying to answer a question here, I ran some test code to see how Xcode/gdb reported the class of instances in class clusters (see below). In the past, I've expected to see something like PrivateClusterClass:PublicSuperClass:NSObject, such as this (which still returns as expected):

        NSPathStore2:NSString:NSObject

    ... for a string created with +[NSString pathWithComponents:]. However, with NSSet and its subclasses, the following code:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            NSSet *s = [NSSet setWithObject:@"setWithObject"];
            NSMutableSet *m = [NSMutableSet setWithCapacity:1];
            [m addObject:@"Added String"];
            NSMutableSet *n = [[NSMutableSet alloc] initWithCapacity:1];
            [self showSuperClasses:s];
            [self showSuperClasses:m];
            [self showSuperClasses:n];
            [self showSuperClasses:@"Steve"];
        }

        - (void)showSuperClasses:(id)anObject {
            Class cl = [anObject class];
            NSString *classDescription = [cl description];
            while ([cl superclass]) {
                cl = [cl superclass];
                classDescription = [classDescription stringByAppendingFormat:@":%@", [cl description]];
            }
            NSLog(@"%@ classes=%@", [anObject class], classDescription);
        }

    ... outputs:

        // NSSet *s
        NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
        // NSMutableSet *m
        NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
        // NSMutableSet *n
        NSCFSet classes=NSCFSet:NSMutableSet:NSSet:NSObject
        // NSString @"Steve"
        NSCFString classes=NSCFString:NSMutableString:NSString:NSObject

    The debugger shows the same class for all set instances. I know that in the past the set class cluster did not return like this. What has changed? (I suspect it is a change in the bridge from Core Foundation.) Which class clusters report just a generic class (e.g. NSCFSet), and which report an actual subclass (e.g. NSPathStore2)? Most importantly, when debugging, how do you determine the actual class of an NSSet cluster instance?
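
    One probe worth having in the toolbox while chasing this (a sketch, assuming the Objective-C runtime headers are available): -class can be overridden by cluster and proxy classes, but the runtime's own view of the isa cannot, so asking the runtime directly shows what the object really is:

        #import <objc/runtime.h>

        // using the s variable from the snippet above
        Class reported = [s class];                 // whatever the cluster chooses to report
        Class actual   = object_getClass(s);        // the isa as the runtime sees it
        NSLog(@"-class says %@, runtime says %s", reported, class_getName(actual));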

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk and 10 MB of indexes according to pgAdmin.

    The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL in the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to:

        small table, 198 rows, 16k on disk
        large table, 1.2M rows, 59 MB data + 89 MB index on disk
        large table, 2.2M rows, 198 MB data + 210 MB index on disk

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by saving bla_id fields directly (x3) and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
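
    In case it helps others, here is a sketch of the drop-and-revalidate pattern for the hourly regeneration (constraint, table, and column names here are made up). Re-adding a constraint validates all rows in one scan, instead of firing a per-row check 100k times during the load:

        BEGIN;
        -- hypothetical constraint names; \d new_table shows the real ones
        ALTER TABLE new_table DROP CONSTRAINT new_table_big1_id_fkey;
        ALTER TABLE new_table DROP CONSTRAINT new_table_big2_id_fkey;
        ALTER TABLE new_table DROP CONSTRAINT new_table_small_id_fkey;

        INSERT INTO new_table SELECT * FROM staging_table;

        ALTER TABLE new_table ADD CONSTRAINT new_table_big1_id_fkey
            FOREIGN KEY (big1_id) REFERENCES big_table1 (id);
        ALTER TABLE new_table ADD CONSTRAINT new_table_big2_id_fkey
            FOREIGN KEY (big2_id) REFERENCES big_table2 (id);
        ALTER TABLE new_table ADD CONSTRAINT new_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        COMMIT;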

    Read the article

  • Scapy install issues. Nothing seems to actually be installed?

    - by Chris
    I have an Apple computer running Leopard with Python 2.6. I downloaded the latest version of Scapy and ran "python setup.py install". All went according to plan. Now, when I try to run it in interactive mode by just typing "scapy", it throws a bunch of errors. What gives? Just in case, here is the FULL error message:

        INFO: Can't import python gnuplot wrapper . Won't be able to plot.
        INFO: Can't import PyX. Won't be able to use psdump() or pdfdump().
        ERROR: Unable to import pcap module: No module named pcap/No module named pcapy
        ERROR: Unable to import dnet module: No module named dnet
        Traceback (most recent call last):
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 122, in _run_module_as_main
            "__main__", fname, loader, pkg_name)
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/runpy.py", line 34, in _run_code
            exec code in run_globals
          File "/Users/owner1/Downloads/scapy-2.1.0/scapy/__init__.py", line 10, in <module>
            interact()
          File "scapy/main.py", line 245, in interact
            scapy_builtins = __import__("all",globals(),locals(),".").__dict__
          File "scapy/all.py", line 25, in <module>
            from route6 import *
          File "scapy/route6.py", line 264, in <module>
            conf.route6 = Route6()
          File "scapy/route6.py", line 26, in __init__
            self.resync()
          File "scapy/route6.py", line 39, in resync
            self.routes = read_routes6()
          File "scapy/arch/unix.py", line 147, in read_routes6
            lifaddr = in6_getifaddr()
          File "scapy/arch/unix.py", line 123, in in6_getifaddr
            i = dnet.intf()
        NameError: global name 'dnet' is not defined
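
    The two ERROR lines are the operative ones: Scapy's own setup.py does not install the dnet and pcap Python bindings it depends on. A rough sketch of building them from source (version numbers and exact steps are guesses from memory; adjust to whatever tarballs you actually download):

        # libdnet plus its bundled Python binding
        cd libdnet-1.12
        ./configure && make && sudo make install
        cd python && sudo python2.6 setup.py install

        # then the pcap binding
        cd ../../pypcap-1.1
        sudo python2.6 setup.py install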

    Read the article

  • C++ Exception Handling

    - by user1413793
    So I was writing some code and I noticed that, apart from syntactical, type, and other compile-time errors, C++ does not throw any other exceptions. So I decided to test this out with a very trivial program:

        #include <iostream>

        int main()
        {
            std::cout << 5/0 << std::endl;
            return 1;
        }

    When I compiled it using g++, it gave me a warning saying I was dividing by 0, but it still compiled the code. Then when I ran it, it printed some really large arbitrary number. What I want to know is: how does C++ deal with exceptions? Integer division by 0 should be a very trivial example of when an exception should be thrown and the program should terminate. Do I have to essentially enclose my entire program in a huge try block and then catch certain exceptions? I know in Python when an exception is thrown, the program will immediately terminate and print out the error. What does C++ do? Are there even runtime exceptions which stop execution and kill the program?
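
    For context on what the language actually guarantees: integer division by zero in C++ is undefined behavior, not an exception, so nothing is thrown and no catch block would ever fire. A minimal sketch of the check-and-throw idiom the language expects you to write yourself:

        #include <iostream>
        #include <stdexcept>

        int safe_div(int a, int b) {
            if (b == 0)
                throw std::domain_error("division by zero");  // thrown explicitly; C++ won't do it for you
            return a / b;
        }

        int main() {
            try {
                std::cout << safe_div(5, 0) << std::endl;
            } catch (const std::exception& e) {
                std::cerr << "caught: " << e.what() << std::endl;  // uncaught, this would call std::terminate
                return 1;
            }
            return 0;
        }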

    Read the article

  • Specializing a class template constructor

    - by SilverSun
    I'm messing around with template specialization and I ran into a problem trying to specialize the constructor based on which policy is used. Here is the code I am trying to get to work:

        #include <cstdlib>
        #include <ctime>

        class DiePolicies {
        public:
            class RollOnConstruction { };
            class CallMethod { };
        };

        #include <boost/static_assert.hpp>
        #include <boost/type_traits/is_same.hpp>

        template<unsigned sides = 6, typename RollPolicy = DiePolicies::RollOnConstruction>
        class Die {
            // policy type check
            BOOST_STATIC_ASSERT((
                boost::is_same<RollPolicy, DiePolicies::RollOnConstruction>::value ||
                boost::is_same<RollPolicy, DiePolicies::CallMethod>::value));

            unsigned m_die;
            unsigned random() { return rand() % sides; }
        public:
            Die();
            void roll() { m_die = random(); }
            operator unsigned () { return m_die + 1; }
        };

        template<unsigned sides>
        Die<sides, DiePolicies::RollOnConstruction>::Die() : m_die(random()) { }

        template<unsigned sides>
        Die<sides, DiePolicies::CallMethod>::Die() : m_die(0) { }

    These are the errors I get in Microsoft Visual Studio 2010:

        ...\main.cpp(29): error C3860: template argument list following class template name must list parameters in the order used in template parameter list
        ...\main.cpp(29): error C2976: 'Die' : too few template arguments
        ...\main.cpp(31): error C3860: template argument list following class template name must list parameters in the order used in template parameter list

    I'm thinking either I can't figure out the right syntax for the specialization, or maybe it isn't possible to do it this way.
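
    For what it's worth, the compiler is right to complain: those two definitions would require a partial specialization of Die, and a member function can't be partially specialized on its own without declaring the whole class partial specialization. A sketch of one standard workaround: keep a single constructor and tag-dispatch to a private init helper overloaded on the (empty, default-constructible) policy type:

        template<unsigned sides = 6, typename RollPolicy = DiePolicies::RollOnConstruction>
        class Die {
            unsigned m_die;
            unsigned random() { return rand() % sides; }
            // overload resolution on the policy tag replaces the constructor specializations
            void init(DiePolicies::RollOnConstruction) { m_die = random(); }
            void init(DiePolicies::CallMethod)         { m_die = 0; }
        public:
            Die() { init(RollPolicy()); }   // fails to compile for any other policy type
            void roll() { m_die = random(); }
            operator unsigned () { return m_die + 1; }
        };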

    Read the article

  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How do I make an Objective-C project work on Ubuntu? My files are:

    Fraction.h

        #import <Foundation/NSObject.h>

        @interface Fraction : NSObject {
            int numerator;
            int denominator;
        }
        -(void) print;
        -(void) setNumerator: (int) n;
        -(void) setDenominator: (int) d;
        -(int) numerator;
        -(int) denominator;
        @end

    Fraction.m

        #import "Fraction.h"
        #import <stdio.h>

        @implementation Fraction
        -(void) print { printf( "%i/%i", numerator, denominator ); }
        -(void) setNumerator: (int) n { numerator = n; }
        -(void) setDenominator: (int) d { denominator = d; }
        -(int) denominator { return denominator; }
        -(int) numerator { return numerator; }
        @end

    main.m

        #import <stdio.h>
        #import "Fraction.h"

        int main( int argc, const char *argv[] ) {
            // create a new instance
            Fraction *frac = [[Fraction alloc] init];

            // set the values
            [frac setNumerator: 1];
            [frac setDenominator: 3];

            // print it
            printf( "The fraction is: " );
            [frac print];
            printf( "\n" );

            // free memory
            [frac release];

            return 0;
        }

    I've tried two approaches to compile it. Pure gcc:

        $ sudo apt-get install gobjc gnustep gnustep-devel
        $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base
        /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    And a GNUmakefile:

        include ${GNUSTEP_MAKEFILES}/common.make
        TOOL_NAME = main
        main_OBJC_FILES = main.m
        include ${GNUSTEP_MAKEFILES}/tool.make

    ... which I ran with:

        $ source /usr/share/GNUstep/Makefiles/GNUstep.sh
        $ make
        Making all for tool main...
        Linking tool main ...
        ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    So in both cases the build gets stuck at "undefined reference to `__objc_class_name_Fraction'". Do you have any idea how to resolve this issue?
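
    Worth noting: that is a linker error, not a compiler one, and both attempts share the same omission, namely that Fraction.m is never compiled or linked. A sketch of the fix for each approach:

        $ gcc `gnustep-config --objc-flags` -o main main.m Fraction.m -lobjc -lgnustep-base

        # or, in the GNUmakefile, list both sources:
        main_OBJC_FILES = main.m Fraction.m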

    Read the article

  • Catching / blocking SIGINT during system call

    - by danben
    I've written a web crawler that I'd like to be able to stop via the keyboard. I don't want the program to die when I interrupt it; it needs to flush its data to disk first. I also don't want to catch KeyboardInterrupt, because the persistent data could be left in an inconsistent state.

    My current solution is to define a signal handler that catches SIGINT and sets a flag; each iteration of the main loop checks this flag before processing the next URL. However, I've found that if the system happens to be executing socket.recv() when I send the interrupt, I get this:

        ^C
        Interrupted; stopping...   // indicates my interrupt handler ran
        Traceback (most recent call last):
          File "crawler_test.py", line 154, in <module>
            main()
          ...
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/socket.py", line 397, in readline
            data = recv(1)
        socket.error: [Errno 4] Interrupted system call

    and the process exits completely. Why does this happen? Is there a way I can prevent the interrupt from affecting the system call?
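
    One way to keep the handler-sets-a-flag design and still survive an interrupt landing inside recv() (a sketch; signal.siginterrupt needs Python 2.6 or later): ask the OS to restart the interrupted system call instead of failing it with EINTR:

        import signal

        stop_requested = False

        def handle_sigint(signum, frame):
            global stop_requested
            stop_requested = True

        signal.signal(signal.SIGINT, handle_sigint)
        # clear the "interrupt system calls" behavior (SA_RESTART semantics):
        # recv() resumes after the handler runs rather than raising EINTR
        signal.siginterrupt(signal.SIGINT, False)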

    Read the article

  • problems with Console.SetOut in Release Mode?

    - by Matt Jacobsen
    I have a bunch of Console.WriteLine calls in my code that I can observe at runtime. I communicate with a native library that I also wrote. I'd like to stick some printf calls in the native library and observe them too, but I don't see them at runtime. I've created a convoluted hello-world app to demonstrate my problem. When the app runs, I can debug into the native library and see that the hello world is called; the output never lands in the TextWriter, though. Note that if the same code is run as a console app, everything works fine.

    C#:

        [DllImport("native.dll")]
        static extern void Test();

        StreamWriter writer;

        public Form1() {
            InitializeComponent();
            writer = new StreamWriter(@"c:\output.txt");
            writer.AutoFlush = true;
            System.Console.SetOut(writer);
        }

        private void button1_Click(object sender, EventArgs e) {
            Test();
        }

    And the native part:

        __declspec(dllexport) void Test() {
            printf("Hello World");
        }

    Update: hamishmcn below started talking about debug/release builds. I removed the native call in the button1_Click method above and replaced it with a standard Console.WriteLine call. When I compiled and ran this in debug mode, the messages were redirected to the output file. When I switched to release mode, however, the calls weren't redirected. Console redirection only seems to work in debug mode. How do I get around this?

    Read the article

  • SIGABRT on iPhone when changing xib

    - by Boz
    I've just finished off an app for the iPhone which, until today, ran fine on the iPhone Simulator and actual devices. I tried changing the xib which is loaded in the applicationDidFinishLaunching method in my application delegate class; all I did was change the string in initWithNibName. When I launch the app on the simulator, the Default.png image is shown, then the app crashes with an uncaught exception. When running on a device, the Default.png image is shown for about 10 seconds, the UI is never loaded, and I get 'GDB: Program received signal: "SIGABRT".' on the Xcode status bar. Debugging shows that applicationDidFinishLaunching is never actually reached before the app crashes.

    Setting the starting xib back to the original solves the issue, but now I've made a change and saved it in Interface Builder, and the app shows the same issues as above despite no code changes at all. Is this a memory issue, or a known issue or common mistake?

    NOTE: I've made no code changes whatsoever, and the only changes I've made to the xib are cosmetic; the IBOutlets are all intact.

    Read the article

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 and DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines.

    I have tried to debug my own apps to see where exactly the GL calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000      MOV EAX,9A
        7C90D7E5   BA 0003FE7F      MOV EDX,7FFE0300
        7C90D7EA   FF12             CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D      JMP 09D50000
        7C90D7F1   9B               WAIT
        7C90D7F2   0000             ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F    ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12             CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400          RETN 14
        7C90D7FF   90               NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000      MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA    LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003             ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE               ???              ; Unknown command
        7C90D7F9   7F FF            JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2             ADC AL,DL
        7C90D7FD   14 00            ADC AL,0
        7C90D7FF   90               NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000      MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and are there any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks. A lot of them are used to nullify data returned by system-information calls (such as the remote debugging port). I also managed to find out that whatever is doing this uses madCHook.dll (by madshi), and this DLL is injected into every running process (this seems to be anti-debugging code). Also, on the OpenGL side, it seems DirectX is fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

    Read the article

  • Drag and Drop in Silverlight with F# and Asynchronous Workflows

    - by knotig
    Hello everyone! I'm trying to implement drag and drop in Silverlight using F# and asynchronous workflows. I'm simply trying to drag around a rectangle on the canvas, using two loops for the two states (waiting and dragging), an idea I got from Tomas Petricek's book "Real-World Functional Programming". But I ran into a problem: unlike WPF or WinForms, Silverlight's MouseEventArgs do not carry information about the button state, so I can't return from the drag loop by checking whether the left mouse button is no longer pressed. I only managed to solve this by introducing a mutable flag. Would anyone have a solution for this that does not involve mutable state?

    Here's the relevant code (please excuse the sloppy dragging code, which snaps the rectangle to the mouse pointer):

        type MainPage() as this =
            inherit UserControl()

            do Application.LoadComponent(this, new System.Uri("/SilverlightApplication1;component/Page.xaml", System.UriKind.Relative))
            let layoutRoot : Canvas = downcast this.FindName("LayoutRoot")
            let rectangle1 : Rectangle = downcast this.FindName("Rectangle1")

            let mutable isDragged = false
            do rectangle1.MouseLeftButtonUp.Add(fun _ -> isDragged <- false)

            let rec drag() = async {
                let! args = layoutRoot.MouseMove |> Async.AwaitEvent
                if isDragged then
                    Canvas.SetLeft(rectangle1, args.GetPosition(layoutRoot).X)
                    Canvas.SetTop(rectangle1, args.GetPosition(layoutRoot).Y)
                    return! drag()
                else
                    return () }

            let wait() = async {
                while true do
                    let! args = Async.AwaitEvent rectangle1.MouseLeftButtonDown
                    isDragged <- true
                    do! drag() }

            do Async.StartImmediate(wait())

    Thank you very much for your time!

    Read the article

  • SQL Server performance issue.

    - by Jit
    Hi Friends, I have been trying to analyze a performance issue with SQL Server 2005. We have 30 jobs, one for each database (30 databases, one per client). The jobs run in the early morning at an interval of 5 minutes.

    When I run a job individually for testing, for most of the databases it finishes in 7 to 9 minutes. But when these jobs run in the early morning, I see a few jobs taking 2 to 3 hours to finish, while the same job takes a few minutes, as mentioned above, if run independently. We don't have any other job scheduled during that time other than these 30 jobs.

    If we restart the server, then for 2 or so days all the jobs finish in a few minutes, but over a period of time (from the 3rd day, suddenly), a few jobs start taking hours to finish. What could be the possible reason for performance degradation over a period of time? I verified all the SPs; we use temp tables, and I made sure none of the temp tables is left without being dropped at the end of the SP.

    Let me know what the possible reasons are for such behavior. I appreciate your time and help. Thanks
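
    One blunt first experiment while better diagnoses come in (a sketch only; note FREEPROCCACHE is server-wide, so be careful on production): gradual degradation that a restart cures often points at stale optimizer statistics or a bloated plan cache:

        EXEC sp_updatestats;    -- refresh statistics in the current database
        DBCC FREEPROCCACHE;     -- flush cached plans (affects the whole server)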

    Read the article

  • Python virtualenv questions

    - by orokusaki
    I'm using virtualenv on Windows XP, and I'm wondering if I have my brain wrapped around it correctly.

    I ran virtualenv ENV and it created C:\WINDOWS\system32\ENV. I then changed my PATH variable to include C:\WINDOWS\system32\ENV\Scripts instead of C:\Python27\Scripts. Then I checked out Django into C:\WINDOWS\system32\ENV\Lib\site-packages\django-trunk, updated my PYTHONPATH variable to point to the new Django directory, and continued to easy_install other things (which of course go into my new C:\WINDOWS\system32\ENV\Lib\site-packages directory).

    I understand why I should use virtualenv, so I can run multiple versions of Django and other libraries on the same machine. But does this mean that to switch between environments I have to basically change my PATH and PYTHONPATH variables? So I go from developing one Django project which uses Django 1.2 in an environment called ENV, and then change my PATH and such so that I can use an environment called ENV2 which has the dev version of Django? Is that basically it, or is there some better way to do all this automatically (I could update my path in Python code, but that would require me to write machine-specific code in my application)?

    Also, how does this process compare to using virtualenv on Linux (I'm quite the beginner at Linux)?
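
    For reference, virtualenv generates per-environment activate/deactivate scripts that do the PATH juggling for you, so the manual variable editing shouldn't be necessary. A sketch of the Windows flow, reusing the ENV path from above:

        C:\> C:\WINDOWS\system32\ENV\Scripts\activate.bat
        (ENV) C:\> python -c "import django; print django.get_version()"
        (ENV) C:\> deactivate
        C:\> C:\path\to\ENV2\Scripts\activate.bat
        (ENV2) C:\> ...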

    Read the article

  • Parallel.ForEach loop creating multiple db connections throws connection errors?

    - by shawn.mek
    Login failed. The login is from an untrusted domain and cannot be used with Windows authentication

    I wanted to get my code running in parallel, so I changed my foreach loop to a Parallel.ForEach loop. It seemed simple enough. Each loop connects to the database, looks up some stuff, performs some logic, adds some stuff, and closes the connection. But I get the above error.

    I'm using my local SQL Server and Entity Framework (each loop uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this? Before trying to convert to a Parallel.ForEach loop, I had split my list of objects into four groups (separate CSV files) and run four concurrent instances of my program, which ran faster overall than just one; thus the idea for going parallel. So it seems connecting to the db shouldn't be a problem. Any ideas?

    EDIT: Here's before:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);

        foreach (var cloneIdAndAccessions in allAccessionsFromObs)
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString);

    And after:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);

        Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions =>
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));

    Inside DoWork I use the BioEntities:

        using (var bioEntities = new BioEntities(connectionString)) { ... }
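
    One knob worth trying (a sketch, not a diagnosis): cap the degree of parallelism so the connection pool and the local security subsystem aren't hit by an unbounded number of simultaneous logins. The limit of 4 here just mirrors the four-instance manual split that already worked:

        var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
        Parallel.ForEach(allAccessionsFromObs, options, cloneIdAndAccessions =>
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));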

    Read the article

  • Errors with redefinitions after upgrade to XCode 3.2.3

    - by CA Bearsfan
    I recently upgraded to Snow Leopard and Xcode 3.2.5 so I could test on my iPod Touch and iPhone, and ran into some problems with the project I was working on. First it couldn't find a Base SDK; then my old frameworks weren't hooking up correctly. Finally, after setting the Project Format to Xcode 3.1-compatible (3.2 also worked), the Base SDK for all configurations to iOS 4.2, and my iOS deployment target to iOS 3.0, I was able to get the system to find a Base SDK and attempt a build.

    That's when the frameworks didn't want to cooperate. Four of the six frameworks I'm using displayed in red, so I re-routed the path to the iPhone Simulator 4.2 platform, which worked perfectly. I was able to build my project with no errors or warnings, and my app worked fine. I went to work last night thinking I had fixed the problem.

    This morning I fired up the laptop, went to build my code base, and now have 1142 errors, all in code I haven't written, reported as redefinitions. Suggestions? The following is just a small sample of the error list (obviously you don't need to see all 1142):

        //Frameworks/Foundation.framework/Headers/NSZone.h:48: error: redefinition of 'NSMakeCollectable'
        /Frameworks/Foundation.framework/Headers/NSObject.h:65: error: duplicate interface declaration for class 'NSObject'
        /Frameworks/Foundation.framework/Headers/NSObject.h:67: error: redefinition of 'struct NSObject'

    Read the article

  • SQL Server - stored procedure suddenly becomes slow

    - by Barguast
    I have written a stored procedure that, yesterday, typically completed in under a second. Today, it takes about 18 seconds. I ran into the problem yesterday as well, and it seemed to be solved by DROPping and re-CREATEing the stored procedure. Today, that trick doesn't appear to be working. :(

    Interestingly, if I copy the body of the stored procedure and execute it as a straightforward query, it completes quickly. It seems to be the fact that it's a stored procedure that's slowing it down...!

    Does anyone know what the problem might be? I've searched for answers, but often they recommend running it through Query Analyzer, which I don't have; I'm using SQL Server 2008 Express for now. The stored procedure is as follows:

        ALTER PROCEDURE [dbo].[spGetPOIs]
            @lat1 float,
            @lon1 float,
            @lat2 float,
            @lon2 float,
            @minLOD tinyint,
            @maxLOD tinyint,
            @exact bit
        AS
        BEGIN
            -- Create the query rectangle as a polygon
            DECLARE @bounds geography;
            SET @bounds = dbo.fnGetRectangleGeographyFromLatLons(@lat1, @lon1, @lat2, @lon2);

            -- Perform the selection
            IF (@exact = 0)
            BEGIN
                SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD],
                       [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID]
                FROM [POIs]
                WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD]))
                  AND (@bounds.Filter([Location]) = 1)
            END
            ELSE
            BEGIN
                SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD],
                       [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID]
                FROM [POIs]
                WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD]))
                  AND (@bounds.STIntersects([Location]) = 1)
            END
        END

    The [POIs] table has an index on MinLOD, MaxLOD, and a spatial index on Location.
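
    The drop-and-recreate trick working even once is the classic fingerprint of parameter sniffing: the procedure cached a plan compiled for one set of parameter values and keeps reusing it for very different ones (an ad-hoc query sidesteps the cache, which is why the copied body runs fast). A sketch of the usual first-line fix, forcing a fresh plan for the expensive statement:

        SELECT [ID], [Name], [Type], [Data], [MinLOD], [MaxLOD],
               [Location].[Lat] AS [Latitude], [Location].[Long] AS [Longitude], [SourceID]
        FROM [POIs]
        WHERE NOT ((@maxLOD < [MinLOD]) OR (@minLOD > [MaxLOD]))
          AND (@bounds.STIntersects([Location]) = 1)
        OPTION (RECOMPILE);   -- compile with the actual parameter values on every call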

    Read the article

  • How Do I Escape Apostrophes in Field Values in SQL Server?

    - by Mikecancook
    I asked a question a couple days ago about creating INSERTs by running a SELECT to move data to another server. That worked great until I ran into a table that has full-on HTML and apostrophes in it. What's the best way to deal with this?

    Luckily there aren't too many rows, so as a last resort it is feasible to copy and paste. But eventually I will need to do this again, and by that time the table will probably be way too big to copy and paste these HTML fields. This is what I have now:

        select 'Insert into userwidget ([Type],[UserName],[Title],[Description],[Data],[HtmlOutput],[DisplayOrder],[RealTime],[SubDisplayOrder]) VALUES ('
            + ISNULL('N''' + Convert(varchar(8000), Type) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Username) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Title) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Description) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), Data) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), HTMLOutput) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), DisplayOrder) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), RealTime) + '''', 'NULL') + ','
            + ISNULL('N''' + Convert(varchar(8000), SubDisplayOrder) + '''', 'NULL') + ')'
        from userwidget

    This works fine except for those pesky apostrophes in the HTMLOutput field. Can I escape them by having the query double up on the apostrophes, or is there a way of encoding the field result so it won't matter?
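
    Doubling the apostrophes is exactly the T-SQL escape, and REPLACE can do it inline. A sketch for the HTMLOutput column (the same wrapper works on any other text column):

        + ISNULL('N''' + REPLACE(Convert(varchar(8000), HTMLOutput), '''', '''''') + '''', 'NULL') + ','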

    Read the article

  • jQuery "growl-like" effect in VB.net

    - by StealthRT
    Hey all, I have made a simple form that mimics the jQuery "growl" effect seen here: http://www.sandbox.timbenniks.com/projects/jquery-notice/

    However, I have run into a problem. If I make more than one call to display a growl, it just refreshes the same form with whatever call I send it. In other words, I can only display one form at a time, instead of having one drop down and a new one appear above it. Here is my simple form code for the growl form:

        Public Class msgWindow
            Public howLong As Integer
            Public theType As String
            Private loading As Boolean

            Protected Overrides Sub OnPaint(ByVal pe As System.Windows.Forms.PaintEventArgs)
                Dim pn As New Pen(Color.DarkGreen)
                If theType = "OK" Then
                    pn.Color = Color.DarkGreen
                ElseIf theType = "ERR" Then
                    pn.Color = Color.DarkRed
                Else
                    pn.Color = Color.DarkOrange
                End If
                pn.Width = 2
                pe.Graphics.DrawRectangle(pn, 0, 0, Me.Width, Me.Height)
                pn = Nothing
            End Sub

            Public Sub showMessageBox(ByVal typeOfBox As String, ByVal theMessage As String)
                Me.Opacity = 0
                Me.Show()
                Me.SetDesktopLocation(My.Computer.Screen.WorkingArea.Width - 350, 15)
                Me.loading = True
                theType = typeOfBox
                lblSaying.Text = theMessage
                If typeOfBox = "OK" Then
                    Me.BackColor = Color.FromArgb(192, 255, 192)
                ElseIf typeOfBox = "ERR" Then
                    Me.BackColor = Color.FromArgb(255, 192, 192)
                Else
                    Me.BackColor = Color.FromArgb(255, 255, 192)
                End If
                If Len(theMessage) <= 30 Then
                    howLong = 4000
                ElseIf Len(theMessage) >= 31 And Len(theMessage) <= 80 Then
                    howLong = 7000
                ElseIf Len(theMessage) >= 81 And Len(theMessage) <= 100 Then
                    howLong = 12000
                Else
                    howLong = 17000
                End If
                Me.opacityTimer.Start()
            End Sub

            Private Sub opacityTimer_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles opacityTimer.Tick
                If Me.loading Then
                    Me.Opacity += 0.07
                    If Me.Opacity >= 0.8 Then
                        Me.opacityTimer.Stop()
                        Me.opacityTimer.Dispose()
                        Pause(howLong)
                        Me.loading = False
                        Me.opacityTimer.Start()
                    End If
                Else
                    Me.Opacity -= 0.08
                    If Me.Opacity <= 0 Then
                        Me.opacityTimer.Stop()
                        Me.Close()
                    End If
                End If
            End Sub

            Public Sub Pause(ByVal Milliseconds As Integer)
                Dim dTimer As Date
                dTimer = Now.AddMilliseconds(Milliseconds)
                Do While dTimer > Now
                    Application.DoEvents()
                Loop
            End Sub
        End Class

    I call the form with this simple call:

        Call msgWindow.showMessageBox("OK", "Finished searching images.")

    Does anyone know a way I can keep the same setup but display any number of forms without refreshing the same form over and over again? Like always, any help would be great! :)

    David
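
    For what it's worth, the one-at-a-time behavior is what VB's default form instances give you: msgWindow.showMessageBox(...) always talks to the same implicit instance. A sketch of the instance-per-notification approach, with a rough vertical offset so stacked growls don't overlap (the offset math is illustrative only):

        ' count the growls already on screen so the new one lands below them
        Dim offset As Integer = 15
        For Each f As Form In Application.OpenForms
            If TypeOf f Is msgWindow Then offset += f.Height + 10
        Next

        Dim popup As New msgWindow()
        popup.showMessageBox("OK", "Finished searching images.")
        popup.SetDesktopLocation(My.Computer.Screen.WorkingArea.Width - 350, offset)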

    Read the article

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's what it's called). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cutoff frequency at a given moment, I use a user-provided linear function, which yields values between 0 and 1.

    My first attempt was to directly map the values returned by the linear function to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it looked as if the sweep ran much faster when moving through low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it has something to do with human hearing perceiving changes in frequency in a non-linear manner.

    My next attempt was to move the filter's cutoff frequency in a logarithmic way. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
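
    For concreteness, a sketch of the equal-ratio (exponential) mapping that usually gives a perceptually even sweep; pitch perception is roughly logarithmic in frequency, so the cutoff should multiply by a constant factor per unit of time rather than add one. If this is already what the "logarithmic" attempt does, the usual next refinement is a psychoacoustic scale such as Bark or ERB:

        import math

        def cutoff(x, f_lo=20.0, f_hi=20000.0):
            """Map x in [0, 1] to a cutoff frequency with an equal frequency RATIO per step."""
            return f_lo * (f_hi / f_lo) ** x

        # equivalent form: interpolate linearly in log-frequency space
        # math.exp(math.log(f_lo) + x * (math.log(f_hi) - math.log(f_lo)))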

    Read the article

  • Would anybody recommend learning J/K/APL?

    - by ozan
    I came across J/K/APL a few months ago while working my way through some Project Euler problems, and was intrigued, to say the least. For every elegant-looking 20-line Python solution I produced, there'd be a gobsmacking 20-character J solution that ran in a tenth of the time. I've been keen to learn some basic J, and have made a few attempts at picking up the vocabulary, but have found the learning curve to be quite steep.

    To those who are familiar with these languages: would you recommend investing some time to learn one (I'm thinking J in particular)? I would do so more for the purpose of satisfying my curiosity than for career advancement or some such thing.

    Some personal circumstances to consider, if you care to:

        I love mathematics, and use it daily in my work (as a mathematician for a startup), but to be honest I don't really feel limited by the tools that I use (like Python + NumPy), so I can't use that excuse.

        I have no particular desire to work in the finance industry, which seems to be the main port of call for K users at least. Plus I should really learn C# as a next language, as it's the primary language where I work. So practically speaking, J almost definitely shouldn't be the next language I learn.

        I'm reasonably familiar with MATLAB, so using an array-based programming language wouldn't constitute a tremendous paradigm shift.

    Any advice from those familiar with these languages would be much appreciated.

    Read the article

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files, and none of them seem to answer this question: where does the data go once you import it?

    Context: I created a user like so:

        SQL> create user IMPORTER identified by "12345";
        SQL> grant connect, unlimited tablespace, resource to IMPORTER;

    I then ran the 'imp' command as follows:

        C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp

    Now, there were 9 .dmp files; after each one it asked me for the next one, and then I received the message "Import terminated successfully with warnings." The warnings were:

        Warning: the objects were exported by OVIEDOE, not by you

        import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
        export client uses WE8ISO8859P1 character set (possible charset conversion)
        IMP-00046: using FILESIZE value from export file of 2147483648

    Now, it says it was terminated successfully, so my assumption (I am new to Oracle, so this may be wrong) is that the data was loaded. However, when I use SQL Developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
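
    A quick way to check where the rows actually landed (a sketch; run the first query as a privileged user, and note the owner name in the data dictionary is normally stored in uppercase):

        SELECT owner, table_name FROM all_tables WHERE owner IN ('IMPORTER', 'OVIEDOE');

        -- or, connected as IMPORTER:
        SELECT table_name FROM user_tables;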

    Read the article

  • Variable reference problem when loading an object from a file in Java

    - by Snail
    I have a problem with the reference of a variable when loading a saved, serialized object from a data file. All the variables referencing the same object don't seem to update on the change. I've made a code snippet below that illustrates the problem:

        Tournament test1 = new Tournament();
        Tournament test2 = test1;

        try {
            FileInputStream fis = new FileInputStream("test.out");
            ObjectInputStream in = new ObjectInputStream(fis);
            test1 = (Tournament) in.readObject();
            in.close();
        } catch (IOException ex) {
            Logger.getLogger(Frame.class.getName()).log(Level.SEVERE, null, ex);
        } catch (ClassNotFoundException ex) {
            Logger.getLogger(Frame.class.getName()).log(Level.SEVERE, null, ex);
        }

        System.out.println("test1: " + test1);
        System.out.println("test2: " + test2);

    After this code is run, test1 and test2 don't reference the same object anymore. To my knowledge they should, since the declaration of test2 makes it a reference to test1. When test1 is updated, test2 should reflect the change and return the new object when called in the code. Am I missing something essential here, or have I been mistaught about how variable references in Java work?
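
    For reference, a minimal sketch of the semantics in play: test2 was never a reference to the variable test1, only a second reference to the object test1 happened to point at, so reassigning test1 (which is what the in.readObject() line does) cannot affect test2:

        Tournament a = new Tournament();
        Tournament b = a;            // b copies the reference: both point at object #1
        a = new Tournament();        // a is rebound to object #2
        System.out.println(a == b);  // false: b still points at object #1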

    Read the article

  • Hibernate: Querying objects by attributes of inherited classes

    - by MichaelD
    Hi all, I ran into a problem with Hibernate concerning queries on classes which use inheritance. Basically, I have the following class hierarchy:

        @Entity
        @Table( name = "recording" )
        class Recording {
            ClassA attributeSet;
            ...
        }

        @Entity
        @Inheritance( strategy = InheritanceType.JOINED )
        @Table( name = "classA" )
        public class ClassA {
            String Id;
            ...
        }

        @Entity
        @Table( name = "ClassB1" )
        @PrimaryKeyJoinColumn( name = "Id" )
        public class ClassB1 extends ClassA {
            private Double P1300;
            private Double P2000;
        }

        @Entity
        @Table( name = "ClassB2" )
        @PrimaryKeyJoinColumn( name = "Id" )
        public class ClassB2 extends ClassA {
            private Double P1300;
            private Double P3000;
        }

    The hierarchy is already given like this and I cannot change it easily. As you can see, ClassB1 and ClassB2 inherit from ClassA. Both classes contain a set of attributes which sometimes even have the same names (but I can't move them to ClassA, since there are possible subclasses which do not use them). The Recording class references one instance of one of these classes.

    Now my question: what I want to do is select all Recording objects in my database which refer to an instance of either ClassB1 or ClassB2 with, e.g., the field P1300 == 15.5 (so this could be ClassB1 or ClassB2 instances, since the P1300 attribute is declared in both classes). What I tried is something like this:

        Criteria criteria = session.createCriteria(Recording.class);
        criteria.add( Restrictions.eq( "attributeSet.P1300", new Double(15.5) ) );
        criteria.list();

    But since P1300 is not an attribute of ClassA, Hibernate throws an exception telling me:

        could not resolve property: P1300 of: ClassA

    How can I tell Hibernate that it should search all subclasses to find the attribute I want to filter on?

    Thanks,
    MichaelD
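
    One workaround sketch (untested, and the entity names are taken from above): since the property only exists on the subclasses, query the subclass explicitly and join back to Recording by entity equality, running it once per subclass that declares P1300 and combining the results:

        // theta-style join in HQL; repeat with ClassB2 for the other subclass
        List hits = session.createQuery(
                "select r from Recording r, ClassB1 b " +
                "where r.attributeSet = b and b.P1300 = :v")
            .setParameter("v", new Double(15.5))
            .list();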

    Read the article

  • Python raw strings and trailing backslashes.

    - by dash-tom-bang
    I ran across something once upon a time and wondered if it was a Python "bug" or at least a misfeature. I'm curious if anyone knows of any justification for this behavior. I thought of it just now while reading "Code Like a Pythonista," which has been enjoyable so far. I'm only familiar with the 2.x line of Python.

    Raw strings are strings that are prefixed with an r. This is great because I can use backslashes in regular expressions and I don't need to double everything everywhere. It's also handy for writing throwaway scripts on Windows, so I can use backslashes there also. (I know I can also use forward slashes, but throwaway scripts often contain content cut & pasted from elsewhere in Windows.)

    So great! Unless, of course, you really want your string to end with a backslash. There's no way to do that in a 'raw' string:

        In [9]: r'\n'
        Out[9]: '\\n'

        In [10]: r'abc\n'
        Out[10]: 'abc\\n'

        In [11]: r'abc\'
        ------------------------------------------------
           File "<ipython console>", line 1
             r'abc\'
                   ^
        SyntaxError: EOL while scanning string literal

        In [12]: r'abc\\'
        Out[12]: 'abc\\\\'

    So one slash before the closing quote is an error, but two slashes give you two slashes! Certainly I'm not the only one bothered by this? Thoughts on why 'raw' strings are 'raw, except for slash-quote'? I mean, if I wanted to embed a single quote in there I'd just use double quotes around the string, and vice versa. If I wanted both, I'd just triple-quote. If I really wanted three quotes in a row in a raw string, well, I guess I'd have to deal, but is this considered "proper behavior"?
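
    For anyone hitting this in practice, the usual workarounds (a sketch; both forms build the identical string):

        path = r'C:\some\dir' + '\\'   # raw string for the bulk, one escaped backslash at the end
        path = 'C:\\some\\dir\\'       # or skip the raw prefix and escape throughout
        assert r'C:\some\dir' + '\\' == 'C:\\some\\dir\\'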

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (during peak hours; around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with "fopen(file,'a')" and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable.

    The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize the data and insert it into another MySQL table.

    How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so any intuitive ways, if there are multiple ways of going at it, you smart people can think of... please let me know :)
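
    For scale, the flat-file-plus-cron idea usually comes down to one of two bulk paths, both of which amortize per-statement overhead across many rows (a sketch; the table and column names are made up):

        -- multi-row (extended) INSERT: one statement, one round trip, many rows
        INSERT INTO hits (site_id, url, hit_time) VALUES
            (1, '/index', NOW()),
            (1, '/about', NOW()),
            (2, '/index', NOW());

        -- or bulk-load the flat file the tracker appends to
        LOAD DATA INFILE '/var/log/tracker/hits.csv'
            INTO TABLE hits
            FIELDS TERMINATED BY ',';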

    Read the article
