Search Results

Search found 10366 results on 415 pages for 'const char pointer'.

  • Unwanted SDL_QUIT event on mouse click

    - by Anthony Clever
    I'm having a slight problem with my SDL/Opengl code, specifically, when i try to do something on a mousebuttondown event, the program sends an sdl_quit event to the stack, closing my application. I know this because I can make the program work (sans the ability to quit out of it :| ) by checking for SDL_QUIT during my event loop, and making it do nothing, rather than quitting the application. If anyone could help make my program work, while retaining the ability to, well, close it, it'd be much appreciated. Code attached below: #include "SDL/SDL.h" #include "SDL/SDL_opengl.h" void draw_polygon(); void init(); int main(int argc, char *argv[]) { SDL_Event Event; int quit = 0; GLfloat color[] = { 0.0f, 0.0f, 0.0f }; init(); glColor3fv (color); glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0); draw_polygon(); while(!quit) { while(SDL_PollEvent( &Event )) { switch(Event.type) { case SDL_MOUSEBUTTONDOWN: for (int i = 0; i <= sizeof(color); i++) { color[i] += 0.1f; } glColor3fv ( color ); draw_polygon(); case SDL_KEYDOWN: switch(Event.key.keysym.sym) { case SDLK_ESCAPE: quit = 1; default: break; } default: break; } } } SDL_Quit(); return 0; } void draw_polygon() { glBegin(GL_POLYGON); glVertex3f (0.25, 0.25, 0.0); glVertex3f (0.75, 0.25, 0.0); glVertex3f (0.75, 0.75, 0.0); glVertex3f (0.25, 0.75, 0.0); glEnd(); SDL_GL_SwapBuffers(); } void init() { SDL_Init(SDL_INIT_EVERYTHING); SDL_SetVideoMode( 640, 480, 32, SDL_OPENGL ); glClearColor (0.0, 0.0, 0.0, 0.0); glMatrixMode( GL_PROJECTION | GL_MODELVIEW ); glLoadIdentity(); glClear (GL_COLOR_BUFFER_BIT); SDL_WM_SetCaption( "OpenGL Test", NULL ); } If it matters in this case, I'm compiling via the included compiler with Visual C++ 2008 express.
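
    A hedged reading of the crash: sizeof(color) is 12 (bytes), not 3 (elements), so the loop writes past the end of the array and tramples the stack around it, including the SDL_Event — corrupted event memory is a classic source of phantom SDL_QUITs. The mouse case also falls through into the key handling because it lacks a break. A minimal sketch of the corrected fragment:

        case SDL_MOUSEBUTTONDOWN:
            /* iterate over elements, not bytes */
            for (size_t i = 0; i < sizeof(color) / sizeof(color[0]); i++) {
                color[i] += 0.1f;
            }
            glColor3fv(color);
            draw_polygon();
            break;  /* without this, execution falls through into SDL_KEYDOWN */
        case SDL_KEYDOWN: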

    Read the article

  • System.Data.SQLite parameter issue

    - by CasperT
    I have the following code: try { //Create connection SQLiteConnection conn = DBConnection.OpenDB(); //Verify user input, normally you give dbType a size, but Text is an exception var uNavnParam = new SQLiteParameter("@uNavnParam", SqlDbType.Text) { Value = uNavn }; var bNavnParam = new SQLiteParameter("@bNavnParam", SqlDbType.Text) { Value = bNavn }; var passwdParam = new SQLiteParameter("@passwdParam", SqlDbType.Text) {Value = passwd}; var pc_idParam = new SQLiteParameter("@pc_idParam", SqlDbType.TinyInt) { Value = pc_id }; var noterParam = new SQLiteParameter("@noterParam", SqlDbType.Text) { Value = noter }; var licens_idParam = new SQLiteParameter("@licens_idParam", SqlDbType.TinyInt) { Value = licens_id }; var insertSQL = new SQLiteCommand("INSERT INTO Brugere (navn, brugernavn, password, pc_id, noter, licens_id)" + "VALUES ('@uNameParam', '@bNavnParam', '@passwdParam', '@pc_idParam', '@noterParam', '@licens_idParam')", conn); insertSQL.Parameters.Add(uNavnParam); //replace paramenter with verified userinput insertSQL.Parameters.Add(bNavnParam); insertSQL.Parameters.Add(passwdParam); insertSQL.Parameters.Add(pc_idParam); insertSQL.Parameters.Add(noterParam); insertSQL.Parameters.Add(licens_idParam); insertSQL.ExecuteNonQuery(); //Execute query //Close connection DBConnection.CloseDB(conn); //Let the user know that it was changed succesfully this.Text = "Succes! Changed!"; } catch(SQLiteException e) { //Catch error MessageBox.Show(e.ToString(), "ALARM"); } It executes perfectly, but when I view my "brugere" table, it has inserted the values: '@uNameParam', '@bNavnParam', '@passwdParam', '@pc_idParam', '@noterParam', '@licens_idParam' literally. Instead of replacing them. I have tried making a breakpoint and checked the parameters, they do have the correct assigned values. So that is not the issue either. I have been tinkering with this a lot now, with no luck, can anyone help? Oh and for reference, here is the OpenDB method from the DBConnection class: public static SQLiteConnection OpenDB() { try { //Gets connectionstring from app.config const string myConnectString = "data source=data;"; var conn = new SQLiteConnection(myConnectString); conn.Open(); return conn; } catch (SQLiteException e) { MessageBox.Show(e.ToString(), "ALARM"); return null; } }
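
    The literal inserts are explained by the single quotes in the VALUES clause: '@uNavnParam' is a string literal, while @uNavnParam (unquoted) is a placeholder. Note also that the SQL says @uNameParam but the parameter added is named @uNavnParam. A hedged sketch of the corrected statement:

        var insertSQL = new SQLiteCommand(
            "INSERT INTO Brugere (navn, brugernavn, password, pc_id, noter, licens_id) " +
            "VALUES (@uNavnParam, @bNavnParam, @passwdParam, @pc_idParam, @noterParam, @licens_idParam)",
            conn);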

    Read the article

  • Sharing a UIView between UIViewControllers in a UITabBarController

    - by Wireless Designs
    Hi all - I have a UIScrollView that houses a gallery of images the user can scroll through. This view needs to be visible on each of three separate UIViewControllers that are housed within a UITabBarController. Right now, I have three separate UIScrollView instances in the UITabBarController subclass, and the controller manages keeping the three synchronized (when a user scrolls the one they can see, programmatically scrolling the other two to match, etc.), which is not ideal. I would like to know if there is a way to work with only ONE instance of the UIScrollView, but have it show up only in the UIViewController that the user is currently interacting with. This would completely eliminate all the synchronization code. Here is basically what I have now in the UITabBarController (which is where all this is currently managed): @interface ScrollerTabBarController : UITabBarController { FirstViewController *firstView; SecondViewController *secondView; ThirdViewController *thirdView; UIScrollView *scrollerOne; UIScrollView *scrollerTwo; UIScrollView *scrollerThree; } @property (nonatomic,retain) IBOutlet FirstViewController *firstView; @property (nonatomic,retain) IBOutlet SecondViewController *secondView; @property (nonatomic,retain) IBOutlet ThirdViewController *thirdView; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerOne; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerTwo; @property (nonatomic,retain) IBOutlet UIScrollView *scrollerThree; @end @implementation ScrollerTabBarController - (void)layoutScroller:(UIScrollView *)scroller {} - (void)scrollToMatch:(UIScrollView *)scroller {} - (void)viewDidLoad { [self layoutScroller:scrollerOne]; [self layoutScroller:scrollerTwo]; [self layoutScroller:scrollerThree]; [scrollerOne setDelegate:self]; [scrollerTwo setDelegate:self]; [scrollerThree setDelegate:self]; [firstView setGallery:scrollerOne]; [secondView setGallery:scrollerTwo]; [thirdView setGallery:scrollerThree]; } - (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView { [self scrollToMatch:scrollView]; } @end The UITabBarController gets notified (as the scroll view's delegate) when the user scrolls one of the instances, and then calls methods like scrollToMatch: to sync up the other two with the user's choice. Is there something that can be done, using a many-to-one relationship on IBOutlet or something like that, to narrow this down to one instance so I'm not having to manage three scroll views? I tried keeping a single instance and moving the pointer from one view to the next using the UITabBarControllerDelegate methods (calling setGallery:nil on the current and setGallery:scrollerOne on the next each time it changed), but the scroller never moved to the other tabs. Thanks in advance!
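
    One common approach, sketched below assuming a single sharedScroller outlet replaces the three: a UIView can only live in one superview at a time, which is why setGallery:-style pointer juggling alone never moved it — the view has to be re-added as a subview of whichever tab is now visible.

        // In ScrollerTabBarController, acting as its own UITabBarControllerDelegate:
        - (void)tabBarController:(UITabBarController *)tabBarController
         didSelectViewController:(UIViewController *)viewController {
            [sharedScroller removeFromSuperview];            // detach from the previous tab
            [viewController.view addSubview:sharedScroller]; // attach to the current one
        }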

    Read the article

  • Failed to load viewstate

    - by Jen
    OK have just started getting this error and I'm not sure why. I have a hosting page which has listview and a panel with a usercontrol. The listview loads up records with a linkbutton. You click the link button to edit that particular record - which gets loaded up in the formview (within the usercontrol) which goes to edit mode. After an update occurs in the formview I'm triggering an event which my hosting page is listening for. The hosting page then rebinds the listview to show the updated data. So this all works - but when I then go to click on a different linkbutton I get the below error: Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3) Timestamp: Fri, 18 Jun 2010 03:15:54 UTC Message: Sys.WebForms.PageRequestManagerServerErrorException: Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request. Line: 4723 Char: 21 Code: 0 URI: http://localhost:1951/AdminWebSite/ScriptResource.axd?d=yfdLw4zYs0bqYqs1arL-htap1ceeKCyW1EXhrhMZy_AqJ36FUpx8b2pzMKL6V7ebYsgJDVm_sZ_ykV1hNtFqgYcJCLLtardHm9-yyA7zC4k1&t=ffffffffec2d9970 Any suggestions as to what is actually wrong?? My event listener does this: protected void RatesEditDate1_EditDateRateUpdated() { if (IsDateRangeValid(txtDisplayFrom, txtDisplayTo)) { PropertyAccommodationRates1.DataBind(); } else { pnlViewAccommodationRates.Visible = false; } divEditRate.Visible = false; } When I click my link button - it should be hitting this but the second time round it errors before hitting the breakpoint: protected void RatesEditDate1_EditDateRateSelected(DateTime theDateTime) { // make sure everything else is invisible pnlAddAccommodation.Visible = false; pnlViewEditAccommodations.Visible = false; RatesEditDate1.TheDateTime = theDateTime; RatesEditDate1.PropertyID = (int)Master.PropertyId; if (!String.IsNullOrEmpty(Accommodations1.SelectedValue)) { RatesEditDate1.AccommodationTypeID = Convert.ToInt32(Accommodations1.SelectedValue); } else { RatesEditDate1.AccommodationTypeID = 0; } divEditRate.Visible = true; } So my listview appears to be being rebound successfully - I can see my changed data.. I just don't know why its complaining about viewstate when I click on the linkbutton. Or is there a better way to update the data in my listview? My listview and formview are bound to objectdata sources (in case that matters) Thanks for the help!
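
    A hedged suggestion rather than a definitive fix: the ListView is rebound from its data source whenever the data changes, so its ViewState carries no information you need — and stale control-tree state is exactly what this exception complains about. Assuming PropertyAccommodationRates1 is the rebound ListView, try switching its ViewState off:

        protected void Page_Init(object sender, EventArgs e)
        {
            // The control is rebound on every change, so persisted state
            // only risks mismatching the rebuilt control tree.
            PropertyAccommodationRates1.EnableViewState = false;
        }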

    Read the article

  • SharePoint 2007 - can't find my modifications to web.config in SPWebApplication.WebConfigModifications

    - by user303672
    Hi, I cant seem to find the modifications I made to web.config in my FeatureRecievers Activated event. I try to get the modifications from the SpWebApplication.WebConfigModifications collection in the deactivate event, but this is always empty.... And the strangest thing is that my changes are still reverted after deactivating the feature... My question is, should I not be able to view all changes made to the web.config files when accessing the SpWebApplication.WebConfigModifications collection in the Deactivating event? How should I go about to remove my changes explicitly? public class FeatureReciever : SPFeatureReceiver { private const string FEATURE_NAME = "HelloWorld"; private class Modification { public string Name; public string XPath; public string Value; public SPWebConfigModification.SPWebConfigModificationType ModificationType; public bool createOnly; public Modification(string name, string xPath, string value, SPWebConfigModification.SPWebConfigModificationType modificationType, bool createOnly) { Name = name; XPath = xPath; Value = value; ModificationType = modificationType; this.createOnly = createOnly; } } private Modification[] modifications = { new Modification("connectionStrings", "configuration", "<connectionStrings/>", SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode, true), new Modification("add[@name='ConnectionString'][@connectionString='Data Source=serverName;Initial Catalog=DBName;User Id=UserId;Password=Pass']", "configuration/connectionStrings", "<add name='ConnectionString' connectionString='Data Source=serverName;Initial Catalog=DBName;User Id=UserId;Password=Pass'/>", SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode, false) }; public override void FeatureActivated(SPFeatureReceiverProperties properties) { SPSite siteCollection = (properties.Feature.Parent as SPWeb).Site as SPSite; SPWebApplication webApplication = siteCollection.WebApplication; siteCollection.RootWeb.Title = "Set from activating code at " + DateTime.Now.ToString(); foreach (Modification entry in modifications) { SPWebConfigModification webConfigModification = CreateModification(entry); webApplication.WebConfigModifications.Add(webConfigModification); } webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications(); webApplication.WebService.Update(); } public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { SPSite siteCollection = (properties.Feature.Parent as SPWeb).Site as SPSite; SPWebApplication webApplication = siteCollection.WebApplication; siteCollection.RootWeb.Title = "Set from deactivating code at " + DateTime.Now.ToString(); IList<SPWebConfigModification> modifications = webApplication.WebConfigModifications; foreach (SPWebConfigModification modification in modifications) { if (modification.Owner == FEATURE_NAME) { webApplication.WebConfigModifications.Remove(modification); } } webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications(); webApplication.WebService.Update(); } public override void FeatureInstalled(SPFeatureReceiverProperties properties) { } public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { } private SPWebConfigModification CreateModification(Modification entry) { SPWebConfigModification spWebConfigModification = new SPWebConfigModification() { Name = entry.Name, Path = entry.XPath, Owner = FEATURE_NAME, Sequence = 0, Type = entry.ModificationType, Value = entry.Value }; return spWebConfigModification; } } Thanks for your 
time. /Hans
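
    Two hedged observations about the code above: the web application's WebConfigModifications collection is only persisted when webApplication.Update() is called (the code updates webApplication.WebService instead, which is a different persisted object), and the deactivation loop removes items from the same collection it is iterating, which silently skips entries. A sketch of a safer removal:

        IList<SPWebConfigModification> mods = webApplication.WebConfigModifications;
        for (int i = mods.Count - 1; i >= 0; i--)   // walk backwards while removing
        {
            if (mods[i].Owner == FEATURE_NAME)
                mods.RemoveAt(i);
        }
        webApplication.Update();                    // persist the collection change first
        webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();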

    Read the article

  • SurfaceView + GLSurfaceView + FrameLayout

    - by pohtzeyun
    Hi, I'm new at this (java and opengl) so please bear with me if the answer to the question is simple. :) I'm trying to get a camera preview screen with the ability to display 3d objects simultaneously. Having gone through the samples at the api demos, I thought combining the code for the the examples at the api demo would suffice. But somehow its not working. The forces me to shut down upon startup and the error is mentioned as null pointer exception. Could someone share with me where did I go wrong and how to proceed from there. How I did the combination for the code is as shown below: myoverview.xml <?xml version="1.0" encoding="utf-8"?> <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="fill_parent"> <android.opengl.GLSurfaceView android:id="@+id/cubes" android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="fill_parent"/> <SurfaceView android:id="@+id/camera" android:layout_width="fill_parent" android:layout_height="fill_parent"/> </FrameLayout> myoverview.java import android.app.Activity; import android.os.Bundle; import android.view.SurfaceView; import android.view.Window; public class MyOverView extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Hide the window title. requestWindowFeature(Window.FEATURE_NO_TITLE); // camera view as the background SurfaceView cameraView = (SurfaceView) findViewById(R.id.camera); cameraView = new CameraView(this); // visual of both cubes GLSurfaceView cubesView = (GLSurfaceView) findViewById(R.id.cubes); cubesView = new GLSurfaceView(this); cubesView.setRenderer(new CubeRenderer(false)); // set view setContentView(R.layout.myoverview); } } GLSurfaceView.java import android.content.Context; class GLSurfaceView extends android.opengl.GLSurfaceView { public GLSurfaceView(Context context) { super(context); } } NOTE : I didnt list the rest of the files as they are just copies of the api demos. The cameraView refers to the camerapreview.java example and the CubeRenderer refers to the CubeRenderer.java and Cube.java example. Any help would be appreciated as I've been stuck at this for a couple of days :p Thanks Sorry, didnt realise that the coding was out of place due to formatting mistakes. :p
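
    The null pointer is most likely the ordering in onCreate: findViewById() returns null until setContentView() has inflated the layout, and the inflated views are then overwritten with new, never-attached instances anyway. Note too that the layout declares android.opengl.GLSurfaceView, so that is the type the inflater creates. A hedged sketch:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            requestWindowFeature(Window.FEATURE_NO_TITLE);
            setContentView(R.layout.myoverview);            // inflate first
            SurfaceView cameraView = (SurfaceView) findViewById(R.id.camera);
            android.opengl.GLSurfaceView cubesView =
                    (android.opengl.GLSurfaceView) findViewById(R.id.cubes);
            cubesView.setRenderer(new CubeRenderer(false)); // configure the inflated view
        }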

    Read the article

  • Why is code quality not popular?

    - by Peter Kofler
    I like my code being in order, i.e. properly formatted, readable, designed, tested, checked for bugs, etc. In fact I am fanatic about it. (Maybe even more than fanatic...) But in my experience actions helping code quality are hardly implemented. (By code quality I mean the quality of the code you produce day to day. The whole topic of software quality with development processes and such is much broader and not the scope of this question.) Code quality does not seem popular. Some examples from my experience include Probably every Java developer knows JUnit, almost all languages implement xUnit frameworks, but in all companies I know, only very few proper unit tests existed (if at all). I know that it's not always possible to write unit tests due to technical limitations or pressing deadlines, but in the cases I saw, unit testing would have been an option. If a developer wanted to write some tests for his/her new code, he/she could do so. My conclusion is that developers do not want to write tests. Static code analysis is often played around in small projects, but not really used to enforce coding conventions or find possible errors in enterprise projects. Usually even compiler warnings like potential null pointer access are ignored. Conference speakers and magazines would talk a lot about EJB3.1, OSGI, Cloud and other new technologies, but hardly about new testing technologies or tools, new static code analysis approaches (e.g. SAT solving), development processes helping to maintain higher quality, how some nasty beast of legacy code was brought under test, ... (I did not attend many conferences and it propably looks different for conferences on agile topics, as unit testing and CI and such has a higer value there.) So why is code quality so unpopular/considered boring? EDIT: Thank your for your answers. Most of them concern unit testing (and has been discussed in a related question). But there are lots of other things that can be used to keep code quality high (see related question). Even if you are not able to use unit tests, you could use a daily build, add some static code analysis to your IDE or development process, try pair programming or enforce reviews of critical code.

    Read the article

  • NSString drawAtPoint crash on the iPhone

    - by Kyle
    Hey. I have a very simple text output to buffer system which will crash randomly. It will be fine for DAYS, then sometimes it'll crash a few times in a few minutes. The callstack is almost exactly the same for other guys who use higher level controls: http://discussions.apple.com/thread.jspa?messageID=7949746 http://stackoverflow.com/questions/1978997/iphone-app-crashed-assertion-failed-function-evictglyphentryfromstrike-file It crashes at the line (below as well in drawTextToBuffer()): [nsString drawAtPoint:CGPointMake(0, 0) withFont:clFont]; I have the same call of "evict_glyph_entry_from_cache" with the abort calls immediately following it. Apparently it happens to other people. I can say that my NSString* is perfectly fine at the time of the crash. I can read the text from the debugger just fine. static CGColorSpaceRef curColorSpace; static CGContextRef myContext; static float w, h; static int iFontSize; static NSString* sFontName; static UIFont* clFont; static int iLineHeight; unsigned long* txb; /* 256x256x4 Buffer */ void selectFont(int iSize, NSString* sFont) { iFontSize = iSize; clFont = [UIFont fontWithName:sFont size:iFontSize]; iLineHeight = (int)(ceil([clFont capHeight])); } void initText() { w = 256; h = 256; txb = (unsigned long*)malloc_(w * h * 4); curColorSpace = CGColorSpaceCreateDeviceRGB(); myContext = CGBitmapContextCreate(txb, w, h, 8, w * 4, curColorSpace, kCGImageAlphaPremultipliedLast); selectFont(12, @"Helvetica"); } void drawTextToBuffer(NSString* nsString) { CGContextSaveGState(myContext); CGContextSetRGBFillColor(myContext, 1, 1, 1, 1); UIGraphicsPushContext(myContext); /* This line will crash. It crashes even with constant Strings.. At the time of the crash, the pointer to nsString is perfectly fine. The data looks fine! */ [nsString drawAtPoint:CGPointMake(0, 0) withFont:clFont]; UIGraphicsPopContext(); CGContextRestoreGState(myContext); } It will happen with other non-unicode supporting methods as well such as CGContextShowTextAtPoint(); the callstack is similar with that as well. Is this any kind of known issue with the iPhone? Or, perhaps, can something outside of this cause be causing an exception in this particular call (drawAtPoint)?
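
    A hedged hypothesis, given the font-cache symbols in the backtrace: UIKit string drawing was not thread-safe on iPhone OS releases of this era, and intermittent evict_glyph_entry_from_strike aborts are the classic symptom of touching it from a secondary thread. If drawTextToBuffer() can ever run off the main thread, funnel it back (sketch assumes iOS 4 / GCD; otherwise performSelectorOnMainThread: on a wrapper object does the same job):

        void drawTextToBuffer(NSString* nsString) {
            if (![NSThread isMainThread]) {
                /* UIKit text rendering shares an unlocked glyph cache;
                   re-dispatch instead of racing it. */
                dispatch_sync(dispatch_get_main_queue(), ^{ drawTextToBuffer(nsString); });
                return;
            }
            /* ...existing drawing code... */
        }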

    Read the article

  • How do you send more than 20 parameters to a stored procedure using ODP.NET?

    - by discwiz
    Switching from Microsofts Oracle Driver to ODP.NET version 10.2.0.100. After changing the data types to OracleDBTypes in a procedure, that worked perficetly using System.Data.OracleClient, the procedure fails if we try and pass in more that 20 parameters. The error returned is: ORA-06550: line 1, column 7: PLS-00306: wrong number or types of arguments in call to 'ADD_TARP_EVENT' ORA-06550: line 1, column 7: PL/SQL: Statement ignorede If we reduce the number of parameters to less than 20 it works. Is this a known issue? Thanks, Dave Here the code for creating the parameters: Shared Function CreateTarpEventCommand(ByVal aTarpEvent As TARPEventType) As OracleCommand Dim cmd As New OracleCommand With aTarpEvent cmd.Parameters.Add(New OracleParameter("I_facID_C", OracleDbType.Char)).Value = .FacilityShortName cmd.Parameters.Add(New OracleParameter("I_facName_VC", OracleDbType.Varchar2)).Value = .FacilityLongName cmd.Parameters.Add(New OracleParameter("I_client_VC", OracleDbType.Varchar2)).Value = .ComputerNameTarpIsRunningOn cmd.Parameters.Add(New OracleParameter("I_TARP_Version_VC", OracleDbType.Varchar2)).Value = .TarpVersionNumber cmd.Parameters.Add(New OracleParameter("I_NAS_Type_VC", OracleDbType.Varchar2)).Value = .FacilityNASSystemType cmd.Parameters.Add(New OracleParameter("I_Aircraft1_Callsign_VC", OracleDbType.Varchar2)).Value = .Aircraft1Callsign If .Aircraft1Type Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Aircraft1_Type_VC", OracleDbType.Varchar2)).Value = .Aircraft1Type End If If .Aircraft1Category Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Aircraft1_Cat_VC", OracleDbType.Varchar2)).Value = .Aircraft1Category End If cmd.Parameters.Add(New OracleParameter("I_Aircraft2_Callsign_VC", OracleDbType.Varchar2)).Value = .Aircraft2Callsign If .Aircraft2Type Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Aircraft2_Type_VC", OracleDbType.Varchar2)).Value = .Aircraft2Type End If If .Aircraft2Category Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Aircraft2_Cat_VC", OracleDbType.Varchar2)).Value = .Aircraft2Category End If If .SensorShortName Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Sensor_Name_VC", OracleDbType.Varchar2)).Value = .SensorShortName End If If .TarpConfigurationName Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_TARP_Config_Name_VC", OracleDbType.Varchar2)).Value = .TarpConfigurationName End If If .EntryCreatorID Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Create_VC", OracleDbType.Varchar2)).Value = .EntryCreatorID End If If .LogAction Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Log_Action_VC", OracleDbType.Varchar2)).Value = .LogAction End If cmd.Parameters.Add(New OracleParameter("I_TARP_Mode_VC", OracleDbType.Varchar2)).Value = .TarpOperatingMode cmd.Parameters.Add(New OracleParameter("I_Min_Loss_N", OracleDbType.Decimal)).Value = .ClosestMeasureOfLoSS If .MapName Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_MAP_NAME_VC", OracleDbType.Varchar2)).Value = .MapName End If If .TarpConfigurationFileHash Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_CONFIG_HASH_VC", OracleDbType.Varchar2)).Value = .TarpConfigurationFileHash End If Dim aDate As OracleDate = CType(.LossEventsMessages(0).LossEventTime, System.DateTime) cmd.Parameters.Add(New OracleParameter("I_FIRST_LOSS_EVENT_DATE", OracleDbType.Date)).Value = aDate cmd.Parameters.Add(New OracleParameter("I_FIRST_LOSS_EVENT_MS_N", OracleDbType.Int32)).Value = 
.LossEventsMessages(0).LossEventMilliSeconds If .ZippedMapFiles Is Nothing Then cmd.Parameters.Add(New OracleParameter("I_Map_File_BL", OracleDbType.Blob)).Value = .ZippedMapFiles End If cmd.Parameters.Add(New OracleParameter("I_TARP_Package_BL", OracleDbType.Blob)).Value = .ZippedTarpPackageWithoutMaps cmd.Parameters.Add(New OracleParameter("rs_RESULTS", OracleDbType.RefCursor)).Direction = ParameterDirection.Output End With Return cmd End Function And here is the code for executing the procedure: Dim workingDataSet As New DataSet Dim oracleConnection As New OracleConnection Dim cmd As New OracleCommand Dim oracleDataAdapter As New OracleDataAdapter Try Using oracleConnection oracleConnection.ConnectionString = System.Configuration.ConfigurationManager.AppSettings("MasterConnectionODT") cmd = HelperDB.CreateTarpEventCommand(TarpEvent) cmd.Connection = oracleConnection cmd.CommandText = "LOADER.ADD_TARP_EVENT" cmd.CommandType = CommandType.StoredProcedure Using oracleConnection oracleConnection.Open() Dim aTransation As OracleTransaction = oracleConnection.BeginTransaction(IsolationLevel.ReadCommitted) Try Using oracleDataAdapter oracleDataAdapter = New OracleDataAdapter(cmd) oracleDataAdapter.TableMappings.Add("Results", "rs_Max") oracleDataAdapter.Fill(workingDataSet) ....
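
    Two hedged things to check before suspecting a 20-parameter limit (none is documented). First, ODP.NET binds parameters by position by default, unlike System.Data.OracleClient, which binds by name — so any skipped parameter shifts every later value and the call no longer matches ADD_TARP_EVENT's signature, which is exactly what PLS-00306 reports. Second, the Is Nothing guards above add a parameter only when the value is Nothing, which looks inverted. A sketch:

        cmd.BindByName = True  ' match parameters by name, as the old driver did

        ' Always add every parameter; pass DBNull for missing values
        Dim p As New OracleParameter("I_Aircraft1_Type_VC", OracleDbType.Varchar2)
        If aTarpEvent.Aircraft1Type Is Nothing Then
            p.Value = DBNull.Value
        Else
            p.Value = aTarpEvent.Aircraft1Type
        End If
        cmd.Parameters.Add(p)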

    Read the article

  • Hooking DirectX EndScene from an injected DLL

    - by Etan
    I want to detour EndScene from an arbitrary DirectX 9 application to create a small overlay. As an example, you could take the frame counter overlay of FRAPS, which is shown in games when activated. I know the following methods to do this: Creating a new d3d9.dll, which is then copied to the games path. Since the current folder is searched first, before going to system32 etc., my modified DLL gets loaded, executing my additional code. Downside: You have to put it there before you start the game. Same as the first method, but replacing the DLL in system32 directly. Downside: You cannot add game specific code. You cannot exclude applications where you don't want your DLL to be loaded. Getting the EndScene offset directly from the DLL using tools like IDA Pro 4.9 Free. Since the DLL gets loaded as is, you can just add this offset to the DLL starting address, when it is mapped to the game, to get the actual offset, and then hook it. Downside: The offset is not the same on every system. Hooking Direct3DCreate9 to get the D3D9, then hooking D3D9-CreateDevice to get the device pointer, and then hooking Device-EndScene through the virtual table. Downside: The DLL cannot be injected, when the process is already running. You have to start the process with the CREATE_SUSPENDED flag to hook the initial Direct3DCreate9. Creating a new Device in a new window, as soon as the DLL gets injected. Then, getting the EndScene offset from this device and hooking it, resulting in a hook for the device which is used by the game. Downside: as of some information I have read, creating a second device may interfere with the existing device, and it may bug with windowed vs. fullscreen mode etc. Same as the third method. However, you'll do a pattern scan to get EndScene. Downside: doesn't look that reliable. How can I hook EndScene from an injected DLL, which may be loaded when the game is already running, without having to deal with different d3d9.dll's on other systems, and with a method which is reliable? How does FRAPS for example perform it's DirectX hooks? The DLL should not apply to all games, just to specific processes where I inject it via CreateRemoteThread.
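
    A hedged sketch of a variant of methods 4 and 5 that works from a late-injected DLL: create a throw-away device purely to read the virtual table, hook that slot, and release the device — the vtable lives in d3d9.dll's code, so the slot address is the same one the game's own device uses. The slot index (42 for EndScene) is an assumption to verify against the IDirect3DDevice9 declaration in your d3d9.h.

        #include <d3d9.h>   // link with d3d9.lib

        void* GetEndSceneAddress() {
            IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
            if (!d3d) return NULL;

            D3DPRESENT_PARAMETERS pp = {0};
            pp.Windowed = TRUE;
            pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
            pp.hDeviceWindow = GetDesktopWindow();  // any valid HWND will do here

            IDirect3DDevice9* dev = NULL;
            if (FAILED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                         pp.hDeviceWindow,
                                         D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                         &pp, &dev))) {
                d3d->Release();
                return NULL;
            }

            void** vtable = *reinterpret_cast<void***>(dev);
            void* endScene = vtable[42];            // assumed EndScene slot
            dev->Release();
            d3d->Release();
            return endScene;                        // detour this address
        }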

    Read the article

  • C++ template-function -> passing a template-class as template-argument

    - by SeMa
    Hello, i try to make intensive use of templates to wrap a factory class: The wrapping class (i.e. classA) gets the wrapped class (i.e. classB) via an template-argument to provide 'pluggability'. Additionally i have to provide an inner-class (innerA) that inherits from the wrapped inner-class (innerB). The problem is the following error-message of the g++ "gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5)": sebastian@tecuhtli:~/Development/cppExercises/functionTemplate$ g++ -o test test.cpp test.cpp: In static member function ‘static classA<A>::innerA<iB>* classA<A>::createInnerAs(iB&) [with iB = int, A = classB]’: test.cpp:39: instantiated from here test.cpp:32: error: dependent-name ‘classA::innerA<>’ is parsed as a non-type, but instantiation yields a type test.cpp:32: note: say ‘typename classA::innerA<>’ if a type is meant As you can see in the definition of method createInnerBs, i intend to pass a non-type argument. So the use of typename is wrong! The code of test.cpp is below: class classB{ public: template < class iB> class innerB{ iB& ib; innerB(iB& b) :ib(b){} }; template<template <class> class classShell, class iB> static classShell<iB>* createInnerBs(iB& b){ // this function creates instances of innerB and its subclasses, // because B holds a certain allocator return new classShell<iB>(b); } }; template<class A> class classA{ // intention of this class is meant to be a pluggable interface // using templates for compile-time checking public: template <class iB> class innerA: A::template innerB<iB>{ innerA(iB& b) :A::template innerB<iB>(b){} }; template<class iB> static inline innerA<iB>* createInnerAs(iB& b){ return A::createInnerBs<classA<A>::template innerA<> >(b); // line 32: error occurs here } }; typedef classA<classB> usable; int main (int argc, char* argv[]){ int a = 5; usable::innerA<int>* myVar = usable::createInnerAs(a); return 0; } Please help me, i have been faced to this problem for several days. Is it just impossible, what i'm trying to do? Or did i forgot something? Thanks, Sebastian
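
    The parse error comes from instantiating the member template with an empty argument list: innerA<> names a (non-existent) instantiation, while createInnerBs wants the template itself as its template-template argument. A hedged sketch of line 32, including the dependent-name keyword the call needs (note the inner constructors must also be made public, or befriended, before new classShell<iB>(b) can compile):

        template<class iB>
        static inline innerA<iB>* createInnerAs(iB& b){
            // pass the template 'innerA', not an instantiation 'innerA<>'
            return A::template createInnerBs<innerA>(b);
        }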

    Read the article

  • How do I rotate a CALayer around a diagonal line?

    - by Mattias Wadman
    Hi. I'm trying to implement a flip animation to be used in board game like iPhone-application. The animation is supposed to look like a game piece that rotates and changes to the color of its back (kind of like an Reversi piece). I've managed to create an animation that flips the piece around its orthogonal axis, but when I try to flip it around a diagonal axis by changing the rotation around the z-axis the actual image also gets rotated (not surprisingly). Instead I would like to rotate the image "as is" around a diagonal axis. I have tried to change layer.sublayerTransform but with no success. Here is my current implementation. It works by doing a trick to resolve the issue of getting a mirrored image at the end of the animation. The solution is to not actually rotate the layer 180 degrees, instead it rotates it 90 degrees, changes image and then rotates it back. + (void)flipLayer:(CALayer *)layer toImage:(CGImageRef)image withAngle:(double)angle { const float duration = 0.5f; CAKeyframeAnimation *diag = [CAKeyframeAnimation animationWithKeyPath:@"transform.rotation.z"]; diag.duration = duration; diag.values = [NSArray arrayWithObjects: [NSNumber numberWithDouble:angle], [NSNumber numberWithDouble:0.0f], nil]; diag.keyTimes = [NSArray arrayWithObjects: [NSNumber numberWithDouble:0.0f], [NSNumber numberWithDouble:1.0f], nil]; diag.calculationMode = kCAAnimationDiscrete; CAKeyframeAnimation *flip = [CAKeyframeAnimation animationWithKeyPath:@"transform.rotation.y"]; flip.duration = duration; flip.values = [NSArray arrayWithObjects: [NSNumber numberWithDouble:0.0f], [NSNumber numberWithDouble:M_PI / 2], [NSNumber numberWithDouble:0.0f], nil]; flip.keyTimes = [NSArray arrayWithObjects: [NSNumber numberWithDouble:0.0f], [NSNumber numberWithDouble:0.5f], [NSNumber numberWithDouble:1.0f], nil]; flip.calculationMode = kCAAnimationLinear; CAKeyframeAnimation *replace = [CAKeyframeAnimation animationWithKeyPath:@"contents"]; replace.duration = duration / 2; replace.beginTime = duration / 2; replace.values = [NSArray arrayWithObjects:(id)image, nil]; replace.keyTimes = [NSArray arrayWithObjects: [NSNumber numberWithDouble:0.0f], nil]; replace.calculationMode = kCAAnimationDiscrete; CAAnimationGroup *group = [CAAnimationGroup animation]; group.removedOnCompletion = NO; group.duration = duration; group.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; group.animations = [NSArray arrayWithObjects:diag, flip, replace, nil]; group.fillMode = kCAFillModeForwards; [layer addAnimation:group forKey:nil]; }
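
    A hedged alternative: rather than composing a z-rotation with the y-flip (which pre-rotates the image), keyframe the full transform with rotations about the diagonal axis itself, e.g. (1, 1, 0). Each keyframe is then a pure rotation about the diagonal, so the image stays upright; the contents swap at the midpoint can stay as it is. Matrix keyframes are interpolated component-wise, so a couple of intermediate steps keep the motion looking like a rotation:

        CAKeyframeAnimation *flip = [CAKeyframeAnimation animationWithKeyPath:@"transform"];
        flip.duration = duration;
        flip.values = [NSArray arrayWithObjects:
            [NSValue valueWithCATransform3D:CATransform3DIdentity],
            [NSValue valueWithCATransform3D:CATransform3DMakeRotation(M_PI / 4, 1.0f, 1.0f, 0.0f)],
            [NSValue valueWithCATransform3D:CATransform3DMakeRotation(M_PI / 2, 1.0f, 1.0f, 0.0f)],
            [NSValue valueWithCATransform3D:CATransform3DMakeRotation(M_PI / 4, 1.0f, 1.0f, 0.0f)],
            [NSValue valueWithCATransform3D:CATransform3DIdentity],
            nil];
        flip.keyTimes = [NSArray arrayWithObjects:
            [NSNumber numberWithDouble:0.0], [NSNumber numberWithDouble:0.25],
            [NSNumber numberWithDouble:0.5], [NSNumber numberWithDouble:0.75],
            [NSNumber numberWithDouble:1.0], nil];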

    Read the article

  • jQuery UI portlets - toggle portlets to save to a cookie (halfway there!)

    - by Gareth
    Hi, I'm a bit of a jQuery n00b so please excuse me if this seems like a stupid question. I am creating a site using the jQuery UI more specifically the sortable portlets. I have been able store whether or not a portlet is has been open or closed to a cookie. This is done using the following code. The slider ID is currently where the controls are stored to turn each portlet on and off. var cookie = $.cookie("hidden"); var hidden = cookie ? cookie.split("|").getUnique() : []; var cookieExpires = 7; // cookie expires in 7 days, or set this as a date object to specify a date // Remember content that was hidden $.each( hidden, function(){ var pid = this; //parseInt(this,10); $('#' + pid).hide(); $("#slider div[name='" + pid + "']").addClass('add'); }) // Add Click functionality $("#slider div").click(function(){ $(this).toggleClass('add'); var el = $("div#" + $(this).attr('name')); el.toggle(); updateCookie(el); }); $('a.toggle').click(function(){ $(this).parents(".portlet").hide(); // *** Below line just needs to select the correct 'id' and insert as selector i.e ('#slider div#block-1') and then update cookie! *** $('#slider div').addClass('add'); }); // Update the Cookie function updateCookie(el){ var indx = el.attr('id'); var tmp = hidden.getUnique(); if (el.is(':hidden')) { // add index of widget to hidden list tmp.push(indx); } else { // remove element id from the list tmp.splice( tmp.indexOf(indx) , 1); } hidden = tmp.getUnique(); $.cookie("hidden", hidden.join('|'), { expires: cookieExpires } ); } }) // Return a unique array. Array.prototype.getUnique = function() { var o = new Object(); var i, e; for (i = 0; e = this[i]; i++) {o[e] = 1}; var a = new Array(); for (e in o) {a.push (e)}; return a; } What I would like to do is also add a [x] into the corner of each portlet to give the user another way of hiding it but I'm unable to currently get this to store within the Cookie using the code above. Can anyone give me a pointer of how I would do this? Thanks in advance! Gareth
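
    For the [x] button, the commented line just needs the portlet's own id rather than selecting every control; a hedged sketch (assuming, as the cookie code implies, that each portlet's id matches the name attribute of its control in #slider):

        $('a.toggle').click(function(){
            var el = $(this).parents('.portlet');
            el.hide();
            // mark only the matching control in #slider
            $("#slider div[name='" + el.attr('id') + "']").addClass('add');
            updateCookie(el);   // persist the hidden state
            return false;
        });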

    Read the article

  • How to send classes defined in .proto (Protocol Buffers) over a socket

    - by make
    Hi, I am trying to send a proto over a socket, but i am getting segmentation error. Could someone please help and tell me what is wrong with this example? file.proto message data{ required string x1 = 1; required uint32 x2 = 2; required float x3 = 3; } client.cpp ... // class defined in proto data data_snd; data data_rec; char *y1 = "operation1"; uint32_t y2 = 123 ; float y3 = 3.14; // assigning data to send() data_snd.set_x1(y1); data_snd.set_x2(y2); data_snd.set_x3(y3); //sending data to the server if (send(socket, &data_snd, sizeof(data_snd), 0) < 0) { cerr << "send() failed" ; exit(1); } //receiving data from the client if (recv(socket, &data_rec, sizeof(data_rec), 0) < 0) { cerr << "recv() failed"; exit(1); } //printing received data cout << data_rec.x1() << "\n"; cout << data_rec.x2() << "\n"; cout << data_rec.x3() << "\n"; ... server.cpp ... //receiving data from the client if (recv(socket, &data_rec, sizeof(data_rec), 0) < 0) { cerr << "recv() failed"; exit(1); } //printing received data cout << data_rec.x1() << "\n"; cout << data_rec.x2() << "\n"; cout << data_rec.x3() << "\n"; // assigning data to send() data_snd.set_x1(data_rec.x1()); data_snd.set_x2(data_rec.x2()); data_snd.set_x3(data_rec.x3()); //sending data to the server if (send(socket, &data_snd, sizeof(data_snd), 0) < 0) { cerr << "send() failed" ; exit(1); } ... Thanks for help and replies-
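
    The segfault is expected: send(socket, &data_snd, sizeof(data_snd), 0) copies the raw C++ object, whose internal pointers mean nothing in the other process. Protocol Buffers messages must be serialized to bytes and parsed back. A hedged sketch with a length prefix so the receiver knows where each message ends (error handling kept minimal):

        #include <string>
        #include <sys/socket.h>
        #include <arpa/inet.h>

        bool send_msg(int sock, const data& msg) {
            std::string out;
            if (!msg.SerializeToString(&out)) return false;
            uint32_t len = htonl(out.size());            // length prefix, network order
            if (send(sock, &len, sizeof(len), 0) < 0) return false;
            return send(sock, out.data(), out.size(), 0) >= 0;
        }

        bool recv_msg(int sock, data* msg) {
            uint32_t len = 0;
            if (recv(sock, &len, sizeof(len), MSG_WAITALL) <= 0) return false;
            len = ntohl(len);
            std::string in(len, '\0');
            if (recv(sock, &in[0], len, MSG_WAITALL) <= 0) return false;
            return msg->ParseFromString(in);
        }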

    Read the article

  • Objective-C wrapper API design methodology

    - by Wade Williams
    I know there's no one answer to this question, but I'd like to get people's thoughts on how they would approach the situation. I'm writing an Objective-C wrapper to a C library. My goals are: 1) The wrapper use Objective-C objects. For example, if the C API defines a parameter such as char *name, the Objective-C API should use name:(NSString *). 2) The client using the Objective-C wrapper should not have to have knowledge of the inner-workings of the C library. Speed is not really any issue. That's all easy with simple parameters. It's certainly no problem to take in an NSString and convert it to a C string to pass it to the C library. My indecision comes in when complex structures are involved. Let's say you have: struct flow { long direction; long speed; long disruption; long start; long stop; } flow_t; And then your C API call is: void setFlows(flow_t inFlows[4]); So, some of the choices are: 1) expose the flow_t structure to the client and have the Objective-C API take an array of those structures 2) build an NSArray of four NSDictionaries containing the properties and pass that as a parameter 3) create an NSArray of four "Flow" objects containing the structure's properties and pass that as a parameter My analysis of the approaches: Approach 1: Easiest. However, it doesn't meet the design goals Approach 2: For some reason, this seems to me to be the most "Objective-C" way of doing it. However, each element of the NSDictionary would have to be wrapped in an NSNumber. Now it seems like we're doing an awful lot just to pass the equivalent of a struct. Approach 3: Seems the cleanest to me from an object-oriented standpoint and the extra encapsulation could come in handy later. However, like #2, it now seems like we're doing an awful lot (creating an array, creating and initializing objects) just to pass a struct. So, the question is, how would you approach this situation? Are there other choices I'm not considering? Are there additional advantages or disadvantages to the approaches I've presented that I'm not considering?
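
    For what it's worth, approach 3 keeps the struct knowledge in exactly one place: the wrapper's boundary. A hedged sketch, with all names illustrative:

        @interface Flow : NSObject {
            flow_t raw;
        }
        - (flow_t)rawFlow;
        @end

        // In the wrapper class: callers pass objects, the C API gets its struct.
        - (void)setFlows:(NSArray *)flows {     // expects exactly 4 Flow objects
            flow_t cFlows[4];
            for (NSUInteger i = 0; i < 4; i++)
                cFlows[i] = [[flows objectAtIndex:i] rawFlow];
            setFlows(cFlows);
        }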

    Read the article

  • How to use C++0x std::thread in the Android NDK?

    - by m-ric
    I am trying to compile this simple program with android-ndk-r8b: jni/hello_jni.cpp #include <iostream> #include <thread> void hello() { std::cout << "Hi i'm a thread!!!" << std::endl; } int main() { std::thread th(hello); th.join(); return 0; } jni/Application.mk APP_OPTIM := release APP_MODULES := hello_thread APP_STL := gnustl_static jni/Android.mk LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) LOCAL_CPPFLAGS += -std=c++0x -frtti LOCAL_MODULE := hello_thread LOCAL_LDLIBS := -L$(SYSROOT)/usr/lib -pthread LOCAL_SRC_FILES := hello_thread.cpp include $(BUILD_EXECUTABLE) ndk-build returns me an error arguin that 'thread' is not a member of 'std'. I issued ndk-build -n to get the compilation command and issued it alone in my shell: /home/evigier/android-ndk-r8b/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-g++ -MMD -MP -MF /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o.d -fpic -ffunction-sections -funwind-tables -fstack-protector -D__ARM_ARCH_5__ -D__ARM_ARCH_5T__ -D__ARM_ARCH_5E__ -D__ARM_ARCH_5TE__ -march=armv5te -mtune=xscale -msoft-float -fno-exceptions -fno-rtti -mthumb -Os -fomit-frame-pointer -fno-strict-aliasing -finline-limit=64 -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include -I/home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi/include -I/home/evigier/eclipse_workspace/hello_thread/jni -DANDROID -Wa,--noexecstack -std=c++0x -frtti -O2 -DNDEBUG -g -I/home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include -c /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp -o /home/evigier/eclipse_workspace/hello_thread/obj/local/armeabi/objs/hello_thread/hello_thread.o Compile++ thumb : hello_thread <= hello_thread.cpp In file included from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/stdio.h:55:0, from /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/wchar.h:33, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/cwchar:46, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/bits/postypes.h:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iosfwd:42, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ios:39, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/ostream:40, from /home/evigier/android-ndk-r8b/sources/cxx-stl/gnu-libstdc++/4.6/include/iostream:40, from jni/hello_thread.cpp:4: /home/evigier/android-ndk-r8b/platforms/android-14/arch-arm/usr/include/sys/types.h:124:9: error: 'uint64_t' does not name a type /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp: In function 'int main()': /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:5: error: 'thread' is not a member of 'std' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:14:17: error: expected ';' before 'th' /home/evigier/eclipse_workspace/hello_thread/jni/hello_thread.cpp:15:5: error: 'th' was not declared in this scope I read a lot of threads/questions about POSIX threads and C++ threads, but still cannot find my answer. My arm-linux-androideabi/include/c++/4.6/thread file defines class thread in std only: #if defined(_GLIBCXX_HAS_GTHREADS) && defined(_GLIBCXX_USE_C99_STDINT_TR1) They don't seem to be defined in my sdk (c++config.h). But how can I possibly turn them on safely? 
Do I need to compile my own toolchain to use (non-p)threads? My host computer is: Linux evigier-ThinkPad-X220 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
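
    For the record, the underlying issue: the gnustl build shipped with this NDK release is compiled without _GLIBCXX_HAS_GTHREADS, so <thread> declares nothing and no compiler flag turns it on. A hedged workaround sketch using Bionic's pthreads directly (no -pthread flag is needed on Android):

        #include <pthread.h>
        #include <iostream>

        static void* hello(void*) {
            std::cout << "Hi, I'm a thread!" << std::endl;
            return NULL;
        }

        int main() {
            pthread_t th;
            pthread_create(&th, NULL, hello, NULL);
            pthread_join(th, NULL);
            return 0;
        }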

    Read the article

  • How to update a user-created bitmap in the Windows API

    - by gamernb
    In my code I quickly generate images on the fly, and I want to display them as quickly as possible. So the first time I create my image, I create a new BITMAP, but instead of deleting the old one and creating a new one for every subsequent image, I just want to copy my data back into the existing one. Here is my code to do both the initial creation and the updating. The creation works just fine, but the updating one doesn't work. BITMAPINFO bi; HBITMAP Frame::CreateBitmap(HWND hwnd, int tol1, int tol2, bool useWhite, bool useBackground) { ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER)); bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); bi.bmiHeader.biWidth = width; bi.bmiHeader.biHeight = height; bi.bmiHeader.biPlanes = 1; bi.bmiHeader.biBitCount = 24; bi.bmiHeader.biCompression = BI_RGB; ZeroMemory(bi.bmiColors, sizeof(RGBQUAD)); // Allocate memory for bitmap bits int size = height * width; Pixel* newPixels = new Pixel[size]; // Recompute the output //memcpy(newPixels, pixels, size*3); ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground); HBITMAP bitmap = CreateDIBitmap(GetDC(hwnd), &bi.bmiHeader, CBM_INIT, newPixels, &bi, DIB_RGB_COLORS); delete newPixels; return bitmap; } and void Frame::UpdateBitmap(HWND hwnd, HBITMAP bitmap, int tol1, int tol2, bool useWhite, bool useBackground) { ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER)); bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); HDC hdc = GetDC(hwnd); if(!GetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, NULL, &bi, DIB_RGB_COLORS)) MessageBox(NULL, "Can't get base image info!", "Error!", MB_ICONEXCLAMATION | MB_OK); // Allocate memory for bitmap bits int size = height * width; Pixel* newPixels = new Pixel[size]; // Recompute the output //memcpy(newPixels, pixels, size*3); ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground); // Push back to windows if(!SetDIBits(hdc, bitmap, 0, bi.bmiHeader.biHeight, newPixels, &bi, DIB_RGB_COLORS)) MessageBox(NULL, "Can't set pixel data!", "Error!", MB_ICONEXCLAMATION | MB_OK); delete newPixels; } where the Pixel struct is just this: struct Pixel { unsigned char b, g, r; }; Why does my update function not work. I always get the MessageBox for "Can't set pixel data!" I used code similar to this when I was loading in the original bitmap from file, then editing the data, but now when I manually create it, it doesn't work.
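
    A hedged guess at why SetDIBits balks: the GetDIBits round trip fills the header from the device-dependent bitmap, which need not describe the 24-bit Pixel buffer at all. Describing the pixels with the same header used at creation time sidesteps that. Note 24-bit rows must be DWORD-aligned, so this sketch assumes width * 3 is a multiple of 4, and new[] should pair with delete[]:

        void Frame::UpdateBitmap(HWND hwnd, HBITMAP bitmap, int tol1, int tol2,
                                 bool useWhite, bool useBackground)
        {
            ZeroMemory(&bi, sizeof(bi));
            bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
            bi.bmiHeader.biWidth       = width;
            bi.bmiHeader.biHeight      = height;
            bi.bmiHeader.biPlanes      = 1;
            bi.bmiHeader.biBitCount    = 24;       // matches the Pixel struct
            bi.bmiHeader.biCompression = BI_RGB;

            Pixel* newPixels = new Pixel[width * height];
            ComputeOutput(newPixels, tol1, tol2, useWhite, useBackground);

            HDC hdc = GetDC(hwnd);
            if (!SetDIBits(hdc, bitmap, 0, height, newPixels, &bi, DIB_RGB_COLORS))
                MessageBox(NULL, "Can't set pixel data!", "Error!",
                           MB_ICONEXCLAMATION | MB_OK);
            ReleaseDC(hwnd, hdc);                  // don't leak the DC
            delete[] newPixels;
        }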

    Read the article

  • XNA - Keyboard text input

    - by Sekhat
    Okay, so basically I want to be able to retrieve keyboard text. Like entering text into a text field or something. I'm only writing my game for windows. I've disregarded using Guide.BeginShowKeyboardInput because it breaks the feel of a self contained game, and the fact that the Guide always shows XBOX buttons doesn't seem right to me either. Yes it's the easiest way, but I don't like it. Next I tried using System.Windows.Forms.NativeWindow. I created a class that inherited from it, and passed it the Games window handle, implemented the WndProc function to catch WM_CHAR (or WM_KEYDOWN) though the WndProc got called for other messages, WM_CHAR and WM_KEYDOWN never did. So I had to abandon that idea, and besides, I was also referencing the whole of Windows forms, which meant unnecessary memory footprint bloat. So my last idea was to create a Thread level, low level keyboard hook. This has been the most successful so far. I get WM_KEYDOWN message, (not tried WM_CHAR yet) translate the virtual keycode with Win32 funcation MapVirtualKey to a char. And I get my text! (I'm just printing with Debug.Write at the moment) A couple problems though. It's as if I have caps lock on, and an unresponsive shift key. (Of course it's not however, it's just that there is only one Virtual Key Code per key, so translating it only has one output) and it adds overhead as it attaches itself to the Windows Hook List and isn't as fast as I'd like it to be, but the slowness could be more due to Debug.Write. Has anyone else approached this and solved it, without having to resort to an on screen keyboard? or does anyone have further ideas for me to try? thanks in advance. note: This is cross posted from the XNA Creators Forums, so if I get an answer there I'll post it here and Vice-Versa Question asked by Jimmy Maybe I'm not understanding the question, but why can't you use the XNA Keyboard and KeyboardState classes? My comment: It's because though you can read keystates, you can't get access to typed text as and how it is typed by the user. So let me further clarify. I want to implement being able to read text input from the user as if they are typing into textbox is windows. The keyboard and KeyboardState class get states of all keys, but I'd have to map each key and combination to it's character representation. This falls over when the user doesn't use the same keyboard language as I do especially with symbols (my double quotes is shift + 2, while american keyboards have theirs somewhere near the return key).
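
    A heavily hedged sketch of the subclassing route on 32-bit Windows (use SetWindowLongPtr on x64): replace the game window's WndProc and watch WM_CHAR, which carries the already-translated character — layout, Shift and Caps Lock are resolved by Windows, which fixes the stuck-caps-lock symptom. If WM_CHAR never arrives, the message loop is not calling TranslateMessage and WM_KEYDOWN has to be translated manually.

        using System;
        using System.Runtime.InteropServices;

        static class TextInput
        {
            const int GWL_WNDPROC = -4, WM_CHAR = 0x0102;
            delegate IntPtr WndProc(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam);

            [DllImport("user32.dll")]
            static extern IntPtr SetWindowLong(IntPtr hWnd, int nIndex, WndProc newProc);
            [DllImport("user32.dll")]
            static extern IntPtr CallWindowProc(IntPtr prev, IntPtr hWnd, int msg,
                                                IntPtr wParam, IntPtr lParam);

            static IntPtr prevProc;
            static WndProc hook;   // field reference keeps the delegate alive

            public static event Action<char> CharEntered;

            public static void Attach(IntPtr windowHandle)   // e.g. Game.Window.Handle
            {
                hook = Handler;
                prevProc = SetWindowLong(windowHandle, GWL_WNDPROC, hook);
            }

            static IntPtr Handler(IntPtr hWnd, int msg, IntPtr wParam, IntPtr lParam)
            {
                if (msg == WM_CHAR && CharEntered != null)
                    CharEntered((char)wParam);
                return CallWindowProc(prevProc, hWnd, msg, wParam, lParam);
            }
        }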

    Read the article

  • Memory allocation problem with SVMs in OpenCV

    - by worksintheory
    Hi, I've been using OpenCV happily for a while, but now I have a problem which has bugged me for quite some time. The following code is reasonably minimal example of my problem: #include <cv.h> #include <ml.h> using namespace cv; int main(int argc, char **argv) { int sampleCountForTesting = 2731; //BROKEN: Breaks svm.train_auto(...) for values of 2731 or greater! Mat trainingData( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) ); Mat trainingResponses( sampleCountForTesting, 1, CV_32FC1, Scalar::all(0.0) ); for(int j = 0; j < 6; j++) { trainingData.at<float>( j, 0 ) = (float) (j%2); trainingResponses.at<float>( j, 0 ) = (float) (j%2); //Setting a few values so I don't get a "single class" error } CvSVMParams svmParams( 100, //100 is CvSVM::C_SVC, 2, //2 is CvSVM::RBF, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, NULL, TermCriteria( TermCriteria::MAX_ITER | TermCriteria::EPS, 2, 1.0 ) ); CvSVM svm = CvSVM(); svm.train_auto( trainingData, trainingResponses, Mat(), Mat(), svmParams ); return 0; } I just create matrices to hold the training data and responses, then set a few entries to some value other than zero, then run the SVM. But it breaks whenever there are 2731 rows or more: OpenCV Error: One of arguments' values is out of range (requested size is negative or too big) in cvMemStorageAlloc, file [omitted]/opencv/OpenCV-2.2.0/modules/core/src/datastructs.cpp, line 332 With fewer rows, it seems to be fine and a classifier trained in a similar manner to the above seems to be giving reasonable output. Am I doing something wrong? I'm pretty sure it's not actually anything to do with lack of memory, as I've got 6GB and also the code works fine when the data has 2730 rows and 10000 columns, which is a much bigger allocation. I'm running OpenCV 2.2 on OSX 10.6 and initially I thought the problem might be related to this bug if for some reason the fix wasn't included in the MacPorts version. Now I've also tried downloading the most recent stable version from the OpenCV site and building with cmake and using that, but I still get the same error, and the fix is definitely included in that version. Any help would be much appreciated! Thanks,
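
    A hedged way to narrow it down: plain train() skips train_auto's internal k-fold bookkeeping, so if the line below succeeds at 2731+ samples, the allocation failure is isolated to train_auto's cross-validation path rather than to SVM training itself.

        CvSVM svm;
        svm.train(trainingData, trainingResponses, Mat(), Mat(), svmParams);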

    Read the article

  • How does MySQL define DISTINCT() in its reference documentation?

    - by goran
    EDIT: This question is about finding definitive reference to MySQL syntax on SELECT modifying keywords and functions. /EDIT AFAIK SQL defines two uses of DISTINCT keywords - SELECT DISTINCT field... and SELECT COUNT(DISTINCT field) ... However in one of web applications that I administer I've noticed performance issues on queries like SELECT DISTINCT(field1), field2, field3 ... DISTINCT() on a single column makes no sense and I am almost sure it is interpreted as SELECT DISTINCT field1, field2, field3 ... but how can I prove this? I've searched mysql site for a reference on this particular syntax, but could not find any. Does anyone have a link to definition of DISTINCT() in mysql or knows about other authoritative source on this? Best EDIT After asking the same question on mysql forums I learned that while parsing the SQL mysql does not care about whitespace between functions and column names (but I am still missing a reference). As it seems you can have whitespace between functions and the parenthesis SELECT LEFT (field1,1), field2... and get mysql to understand it as SELECT LEFT(field,1) Similarly SELECT DISTINCT(field1), field2... seems to get decomposed to SELECT DISTINCT (field1), field2... and then DISTINCT is taken not as some undefined (or undocumented) function, but as SELECT modifying keyword and the parenthesis around field1 are evaluated as if they were part of field expression. It would be great if someone would have a pointer to documentation where it is stated that the whitespace between functions and parenthesis is not significant or to provide links to apropriate MySQL forums, mailing lists where I could raise a question to put this into reference. EDIT I have found a reference to server option IGNORE SPACE. It states that "The IGNORE SPACE SQL mode can be used to modify how the parser treats function names that are whitespace-sensitive", later on it states that recent versions of mysql have reduced this number from 200 to 30. One of the remaining 30 is COUNT for example. With IGNORE SPACE enabled both SELECT COUNT(*) FROM mytable; SELECT COUNT (*) FROM mytable; are legal. So if this is an exception, I am left to conclude that normally functions ignore space by default. If functions ignore space by default then if the context is ambiguous, such as for the first function on a first item of the select expression, then they are not distinguishable from keywords and the error can not be thrown and MySQL must accept them as keywords. Still, my conclusions feel like they have lot of assumptions, I would still be grateful and accept any pointers to see where to follow up on this.
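
    On the semantics: DISTINCT is always a SELECT modifier applying to the whole select list, never a per-column function, so the parentheses merely group the first expression. A hedged illustration:

        -- All three parse identically: duplicates are removed over the
        -- combination (field1, field2, field3), not over field1 alone.
        SELECT DISTINCT(field1), field2, field3 FROM t;
        SELECT DISTINCT (field1), field2, field3 FROM t;
        SELECT DISTINCT field1, field2, field3 FROM t;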

    Read the article

  • PHP SASL (PECL): sasl_server_init(app) works with the CLI but not as an Apache module

    - by ZokRadonh
    I have written a simple auth script so that Webusers can type in their username and password and my PHP script verifies them by SASL. The SASL Library is initialized by php function sasl_server_init("phpfoo"). So phpfoo.conf in /etc/sasl2/ is used. phpfoo.conf: pwcheck_method: saslauthd mech_list: PLAIN LOGIN log_level: 9 So the SASL library now tries to connect to saslauthd process by socket. saslauthd command line looks like this: /usr/sbin/saslauthd -r -V -a pam -n 5 So saslauthd uses PAM to authenticate. In the php script I have created sasl connection by sasl_server_new("php", null, "myRealm"); The first argument is the servicename. So PAM uses the file /etc/pam.d/php to see for further authentication information. /etc/pam.d/php: auth required pam_mysql.so try_first_pass=0 config_file=/etc/pam.d/mysqlconf.nss account required pam_permit.so session required pam_permit.so mysqlconf.nss has all information that is needed for a useful MySQL Query to user table. All of this works perfectly when I run the script by command line. php ssasl.php But when I call the same script via webbrowser(php apache module) I get an -20 return code (SASL_NOUSER). In /var/log/messages there is May 18 15:27:12 hostname httpd2-prefork: unable to open Berkeley db /etc/sasldb2: No such file or directory I do not have anything with a Berkeley db for authentication with SASL. I think authentication using /etc/sasldb2 is the default setting. In my opinion it does not read my phpfoo.conf file. For some reason the php-apache-module ignores the parameter in sasl_server_init("phpfoo"). My first thought was that there is a permission issue. So back in shell: su -s /bin/bash wwwrun php ssasl.php "Authentication successful". - No file-permission issue. In the source of the sasl-php-extension we can find: PHP_FUNCTION(sasl_server_init) { char *name; int name_len; if (zend_parse_parameters(1 TSRMLS_CC, "s", &name, &name_len) == FAILURE) { return; } if (sasl_server_init(NULL, name) != SASL_OK) { RETURN_FALSE; } RETURN_TRUE; } This is a simple pass through of the string. Are there any differences between the PHP CLI and PHP ApacheModule version that I am not aware of? Anyway, there are some interesting log entries when I run PHP in CLI mode: May 18 15:44:48 hostname php: SQL engine 'mysql' not supported May 18 15:44:48 hostname php: auxpropfunc error no mechanism available May 18 15:44:48 hostname php: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: sqlite May 18 15:44:48 hostname php: sql_select option missing May 18 15:44:48 hostname php: auxpropfunc error no mechanism available May 18 15:44:48 hostname php: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: sql Those lines are followed by lines of saslauthd and PAM which results in authentication success.(I do not get any of them in ApacheModule mode) Looks like that he is trying auxprop pwcheck before saslauthd. I have no other .conf file in /etc/sasl2. When I change the parameter of sasl_server_init to something other then I get the same error in CLI mode as in ApacheModule mode.
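
    One hedged way to settle whether the Apache worker reads /etc/sasl2/phpfoo.conf at all, since the Berkeley DB message suggests it falls back to defaults: trace the file opens of one worker process while requesting the page (the PID below is an example):

        # attach to one httpd child and watch for sasl config lookups
        strace -f -e trace=open -p 12345 2>&1 | grep -i sasl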

    Read the article

  • C# file Decryption - Bad Data

    - by Jon
    Hi all, I am in the process of rewriting an old application. The old app stored data in a scoreboard file that was encrypted with the following code: private const String SSecretKey = @"?B?n?Mj?"; public DataTable GetScoreboardFromFile() { FileInfo f = new FileInfo(scoreBoardLocation); if (!f.Exists) { return setupNewScoreBoard(); } DESCryptoServiceProvider DES = new DESCryptoServiceProvider(); //A 64 bit key and IV is required for this provider. //Set secret key For DES algorithm. DES.Key = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Set initialization vector. DES.IV = ASCIIEncoding.ASCII.GetBytes(SSecretKey); //Create a file stream to read the encrypted file back. FileStream fsread = new FileStream(scoreBoardLocation, FileMode.Open, FileAccess.Read); //Create a DES decryptor from the DES instance. ICryptoTransform desdecrypt = DES.CreateDecryptor(); //Create crypto stream set to read and do a //DES decryption transform on incoming bytes. CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read); DataTable dTable = new DataTable("scoreboard"); dTable.ReadXml(new StreamReader(cryptostreamDecr)); cryptostreamDecr.Close(); fsread.Close(); return dTable; } This works fine. I have copied the code into my new app so that I can create a legacy loader and convert the data into the new format. The problem is I get a "Bad Data" error: System.Security.Cryptography.CryptographicException was unhandled Message="Bad Data.\r\n" Source="mscorlib" The error fires at this line: dTable.ReadXml(new StreamReader(cryptostreamDecr)); The encrypted file was created today on the same machine with the old code. I guess that maybe the encryption / decryption process uses the application name / file or something and therefore means I can not open it. Does anyone have an idea as to: A) Be able explain why this isn't working? B) Offer a solution that would allow me to be able to open files that were created with the legacy application and be able to convert them please? Thank you
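
    A hedged hunch about the key itself: @"?B?n?Mj?" looks like a secret that originally contained non-ASCII characters, which copy/paste (and ASCIIEncoding, which maps anything above 127 to '?') silently flattens to literal question marks. If the key bytes differ at all from what the legacy build used, DES decryption fails with exactly this "Bad Data". Pinning the key down as explicit bytes recovered from the legacy binary avoids the round trip through text (the values below are just the ASCII reading of the literal, for illustration):

        // 8 bytes = 64-bit DES key; replace with the legacy app's real bytes
        private static readonly byte[] KeyBytes =
            { 0x3F, 0x42, 0x3F, 0x6E, 0x3F, 0x4D, 0x6A, 0x3F };

        // then, instead of ASCIIEncoding.ASCII.GetBytes(SSecretKey):
        DES.Key = KeyBytes;
        DES.IV = KeyBytes;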

    Read the article

  • VBScript Out of String space

    - by MalsiaPro
    I have the following code to capture information about the files on a specified drive. I ran the script against a 600 GB hard drive on one of our servers, and after a while I get the error:

        Out of String space: 'Join'
        Line 34, Char 2

    The code, in file script.vbs:

        Option Explicit

        Dim objFS, objFld
        Dim objArgs
        Dim strFolder, strDestFile, blnRecursiveSearch
        ''Dim strLines
        Dim strCsv
        ''Dim i
        '' i = 0

        ' 'Get the commandline parameters
        ' Set objArgs = WScript.Arguments
        ' strFolder = objArgs(0)
        ' strDestFile = objArgs(1)
        ' blnRecursiveSearch = objArgs(2)

        '########################################
        'SPECIFY THE DRIVE YOU WANT TO SCAN BELOW
        '########################################
        strFolder = "C:\"
        strDestFile = "C:\InformationOutput.csv"
        blnRecursiveSearch = True

        'Create the FileSystemObject
        Set objFS = CreateObject("Scripting.FileSystemObject")
        'Get the directory you are working in
        Set objFld = objFS.GetFolder(strFolder)

        'Open the csv file
        Set strCsv = objFS.CreateTextFile(strDestFile, True)
        '' 'Write the csv file
        '' Set strCsv = objFS.CreateTextFile(strDestFile, True)
        strCsv.WriteLine "File Path,File Size,Date Created,Date Last Modified,Date Last Accessed"
        '' strCsv.Write Join(strLines, vbCrLf)

        'Now get the file details
        GetFileDetails objFld, blnRecursiveSearch

        '' 'Close and cleanup objects
        '' strCsv.Close

        '' 'Write the csv file
        '' Set strCsv = objFS.CreateTextFile(strDestFile, True)
        '' For i = 0 to UBound(strLines)
        ''     strCsv.WriteLine strLines(i)
        '' Next

        'Close and cleanup objects
        strCsv.Close
        Set strCsv = Nothing
        Set objFld = Nothing
        Set strFolder = Nothing
        Set objArgs = Nothing

        '---------------------------SCAN SPECIFIED LOCATION-------------------------------
        Private Sub GetFileDetails(fold, blnRecursive)
            Dim fld, fil
            Dim strLine(4)
            On Error Resume Next
            If InStr(fold.Path, "System Volume Information") < 1 Then
                If blnRecursive Then
                    'Work through all the folders and subfolders
                    For Each fld In fold.SubFolders
                        GetFileDetails fld, True
                        If err.number <> 0 Then
                            LogError err.Description & vbCrLf & "Folder - " & fold.Path
                            err.Clear
                        End If
                    Next
                End If
                'Now work on the files
                For Each fil In fold.Files
                    strLine(0) = fil.Path
                    strLine(1) = fil.Size
                    strLine(2) = fil.DateCreated
                    strLine(3) = fil.DateLastModified
                    strLine(4) = fil.DateLastAccessed
                    strCsv.WriteLine Join(strLine, ",")
                    If err.number <> 0 Then
                        LogError err.Description & vbCrLf & "Folder - " & fold.Path & vbCrLf & "File - " & fil.Name
                        err.Clear
                    End If
                Next
            End If
        End Sub

        Private Sub LogError(strError)
            Dim strErr
            'Write the error log
            Set strErr = objFS.CreateTextFile("C:\test\err.log", False)
            strErr.WriteLine strError
            strErr.Close
            Set strErr = Nothing
        End Sub

    RunMe.cmd:

        wscript.exe "C:\temp\script\script.vbs"

    How can I avoid getting this error? The server drives are quite a bit larger, and I would imagine that the CSV file would be at least 40 MB.

    Edit by Guffa: I commented out some lines in the code, using double ticks ('') so you can see where.
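    The error comes from the approach the commented-out lines show: collecting every row in strLines and building one giant string with Join, which overruns VBScript's string capacity on a large drive. The general fix is to stream each row to disk as it is produced, which the edited script already does per file. For reference, a short C# sketch of the same streaming pattern (the paths match the script's defaults; everything else is illustrative, since the original is VBScript):

        using System;
        using System.IO;

        class FileScanner
        {
            static void Main()
            {
                const string root = @"C:\";
                const string destFile = @"C:\InformationOutput.csv";

                using (StreamWriter csv = new StreamWriter(destFile))
                {
                    csv.WriteLine("File Path,File Size,Date Created,Date Last Modified,Date Last Accessed");
                    Scan(new DirectoryInfo(root), csv);
                }
            }

            static void Scan(DirectoryInfo folder, StreamWriter csv)
            {
                FileInfo[] files;
                DirectoryInfo[] subs;
                try
                {
                    files = folder.GetFiles();
                    subs = folder.GetDirectories();
                }
                catch (UnauthorizedAccessException)
                {
                    return; // mirrors the script's On Error Resume Next around protected folders
                }

                foreach (FileInfo f in files)
                {
                    // One WriteLine per file: memory use stays flat however big the drive is.
                    csv.WriteLine(string.Join(",", new string[] {
                        f.FullName, f.Length.ToString(), f.CreationTime.ToString(),
                        f.LastWriteTime.ToString(), f.LastAccessTime.ToString() }));
                }

                foreach (DirectoryInfo sub in subs)
                    Scan(sub, csv);
            }
        }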

    Read the article

  • Need help implementing simple socket server using GIOService (GLib, Glib-GIO)

    - by Mark Renouf
    I'm learning the basics of writing a simple, efficient socket server using GLib. I'm experimenting with GSocketService. So far I can only seem to accept connections, but then they are immediately closed. From the docs I can't figure out what step I am missing; I'm hoping someone can shed some light on this for me. When running the following:

        # telnet localhost 4000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.
        # telnet localhost 4000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.
        # telnet localhost 4000
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    Output from the server:

        # ./server
        New Connection from 127.0.0.1:36962
        New Connection from 127.0.0.1:36963
        New Connection from 127.0.0.1:36965

    Current code:

        /*
         * server.c
         *
         * Created on: Mar 10, 2010
         * Author: mark
         */
        #include <glib.h>
        #include <gio/gio.h>

        gchar *buffer;

        gboolean network_read(GIOChannel *source, GIOCondition cond, gpointer data)
        {
            GString *s = g_string_new(NULL);
            GError *error;
            GIOStatus ret = g_io_channel_read_line_string(source, s, NULL, &error);
            if (ret == G_IO_STATUS_ERROR)
                g_error("Error reading: %s\n", error->message);
            else
                g_print("Got: %s\n", s->str);
        }

        gboolean new_connection(GSocketService *service, GSocketConnection *connection,
                                GObject *source_object, gpointer user_data)
        {
            GSocketAddress *sockaddr = g_socket_connection_get_remote_address(connection, NULL);
            GInetAddress *addr = g_inet_socket_address_get_address(G_INET_SOCKET_ADDRESS(sockaddr));
            guint16 port = g_inet_socket_address_get_port(G_INET_SOCKET_ADDRESS(sockaddr));

            g_print("New Connection from %s:%d\n", g_inet_address_to_string(addr), port);

            GSocket *socket = g_socket_connection_get_socket(connection);
            gint fd = g_socket_get_fd(socket);
            GIOChannel *channel = g_io_channel_unix_new(fd);
            g_io_add_watch(channel, G_IO_IN, (GIOFunc) network_read, NULL);
            return TRUE;
        }

        int main(int argc, char **argv)
        {
            g_type_init();

            GSocketService *service = g_socket_service_new();
            GInetAddress *address = g_inet_address_new_from_string("127.0.0.1");
            GSocketAddress *socket_address = g_inet_socket_address_new(address, 4000);
            g_socket_listener_add_address(G_SOCKET_LISTENER(service), socket_address,
                G_SOCKET_TYPE_STREAM, G_SOCKET_PROTOCOL_TCP, NULL, NULL, NULL);
            g_object_unref(socket_address);
            g_object_unref(address);
            g_socket_service_start(service);
            g_signal_connect(service, "incoming", G_CALLBACK(new_connection), NULL);

            GMainLoop *loop = g_main_loop_new(NULL, FALSE);
            g_main_loop_run(loop);
        }

    Read the article

  • Unity framework - creating & disposing Entity Framework datacontexts at the appropriate time

    - by TobyEvans
    Hi there, with some kindly help from StackOverflow I've got Unity Framework to create my chained dependencies, including an Entity Framework data context object:

        using (IUnityContainer container = new UnityContainer())
        {
            container.RegisterType<IMeterView, Meter>();
            container.RegisterType<IUnitOfWork, CommunergySQLiteEntities>(new ContainerControlledLifetimeManager());
            container.RegisterType<IRepositoryFactory, SQLiteRepositoryFactory>();
            container.RegisterType<IRepositoryFactory, WCFRepositoryFactory>("Uploader");

            container.Configure<InjectedMembers>()
                .ConfigureInjectionFor<CommunergySQLiteEntities>(
                    new InjectionConstructor(connectionString));

            MeterPresenter meterPresenter = container.Resolve<MeterPresenter>();

    This works really well in creating my Presenter object and displaying the related view; I'm really pleased. However, the problem I'm running into now is the timing of the creation and disposal of the Entity Framework object (and I suspect this goes for any IDisposable object). Using Unity like this, the SQL EF object CommunergySQLiteEntities is created straight away, because I've added it to the constructor of the MeterPresenter:

        public MeterPresenter(IMeterView view, IUnitOfWork unitOfWork, IRepositoryFactory cacheRepository)
        {
            this.mView = view;
            this.unitOfWork = unitOfWork;
            this.cacheRepository = cacheRepository;
            this.Initialize();
        }

    I felt a bit uneasy about this at the time, as I don't want to hold a database connection open, but I couldn't see any other way using Unity dependency injection. Sure enough, when I actually try to use the data context, I get this error:

        ((System.Data.Objects.ObjectContext)(unitOfWork)).Connection
        '((System.Data.Objects.ObjectContext)(unitOfWork)).Connection' threw an exception of type 'System.ObjectDisposedException'
        System.Data.Common.DbConnection {System.ObjectDisposedException}

    My understanding of the principle of IoC is that you set up all your dependencies at the top, resolve your object, and away you go. However, in this case some of the child objects, e.g. the data context, don't need to be initialised when the parent Presenter object is created (as they are when passed in the constructor), but the Presenter does need to know which type to use for IUnitOfWork when it wants to talk to the database. Ideally, I want something like this inside my resolved Presenter:

        using (IUnitOfWork unitOfWork = new NewInstanceInjectedUnitOfWorkType())
        {
            //do unitOfWork stuff
        }

    so that the Presenter knows which IUnitOfWork implementation to use, and can create and dispose of it straight away, preferably based on the original RegisterType call. Do I have to put another Unity container inside my Presenter, at the risk of creating a new dependency? This is probably really obvious to an IoC guru, but I'd really appreciate a pointer in the right direction. Thanks, Toby
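    One common way to get exactly that using pattern without hiding a container inside the Presenter is to inject a factory instead of the unit of work itself. A hedged C# sketch follows: IUnitOfWorkFactory is invented here for illustration (it is not a Unity type), and it assumes IUnitOfWork implements IDisposable and reuses the question's CommunergySQLiteEntities constructor that takes a connection string.

        // Hypothetical factory abstraction: the Presenter asks for a fresh
        // IUnitOfWork only when it actually needs to touch the database.
        public interface IUnitOfWorkFactory
        {
            IUnitOfWork Create();
        }

        public class SQLiteUnitOfWorkFactory : IUnitOfWorkFactory
        {
            private readonly string connectionString;

            public SQLiteUnitOfWorkFactory(string connectionString)
            {
                this.connectionString = connectionString;
            }

            public IUnitOfWork Create()
            {
                // A new, short-lived context per unit of work.
                return new CommunergySQLiteEntities(connectionString);
            }
        }

        public class MeterPresenter
        {
            private readonly IMeterView mView;
            private readonly IUnitOfWorkFactory unitOfWorkFactory;

            // IRepositoryFactory omitted for brevity.
            public MeterPresenter(IMeterView view, IUnitOfWorkFactory unitOfWorkFactory)
            {
                this.mView = view;
                this.unitOfWorkFactory = unitOfWorkFactory;
            }

            public void SaveReading()
            {
                // Created and disposed around the actual database work, so no
                // connection is held open for the Presenter's lifetime.
                using (IUnitOfWork unitOfWork = unitOfWorkFactory.Create())
                {
                    // ...do unitOfWork stuff...
                }
            }
        }

    The factory then takes the place of the IUnitOfWork mapping, e.g. container.RegisterType<IUnitOfWorkFactory, SQLiteUnitOfWorkFactory>(new InjectionConstructor(connectionString)); the ContainerControlledLifetimeManager registration can be dropped, since a per-container singleton is what makes every later consumer see the same (already disposed) context instance.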

    Read the article
