Search Results

Search found 12623 results on 505 pages for 'portfolio management'.


  • NSTimer as a self-targeting ivar.

    - by Matt Wilding
    I have come across an awkward situation where I would like to have a class with an NSTimer instance variable that repeatedly calls a method of the class as long as the class is alive. For illustration purposes, it might look like this: // .h @interface MyClock : NSObject { NSTimer* _myTimer; } - (void)timerTick; @end // .m @implementation MyClock - (id)init { self = [super init]; if (self) { _myTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0f target:self selector:@selector(timerTick) userInfo:nil repeats:YES] retain]; } return self; } - (void)dealloc { [_myTimer invalidate]; [_myTimer release]; [super dealloc]; } - (void)timerTick { // Do something fantastic. } @end That's what I want. I don't want to have to expose an interface on my class to start and stop the internal timer; I just want it to run while the class exists. Seems simple enough. But the problem is that NSTimer retains its target. That means that as long as that timer is active, it is keeping the class from being dealloc'd by normal memory management methods, because the timer has retained it. Manually adjusting the retain count is out of the question. This behavior of NSTimer seems like it would make it difficult to ever have a repeating timer as an ivar, because I can't think of a time when an ivar should retain its owning class. This leaves me with the unpleasant duty of coming up with some method of providing an interface on MyClock that allows users of the class to control when the timer is started and stopped. Besides adding unneeded complexity, this is annoying because having one owner of an instance of the class invalidate the timer could step on the toes of another owner who is counting on it to keep running. I could implement my own pseudo-retain-count system for keeping the timer running but, ...seriously? This is way too much work for such a simple concept. Any solution I can think of feels hacky. I ended up writing a wrapper for NSTimer that behaves exactly like a normal NSTimer, but doesn't retain its target. I don't like it, and I would appreciate any insight.
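
    The retain cycle described here is not specific to Cocoa: any scheduler that keeps a strong reference to its target will keep the target alive. As a rough illustration of the usual way out (the scheduled task holds only a weak reference to its owner and cancels itself once the owner is gone), here is a minimal sketch in Java terms; the class and method names simply mirror the example above and are not part of any framework:

        import java.lang.ref.WeakReference;
        import java.util.Timer;
        import java.util.TimerTask;

        class MyClock {
            // Shared scheduler; daemon so it cannot keep the process alive on its own.
            private static final Timer SCHEDULER = new Timer(true);

            MyClock() {
                // The task captures only a WeakReference, so the scheduler never
                // prevents this MyClock instance from being garbage collected.
                final WeakReference<MyClock> weakSelf = new WeakReference<>(this);
                SCHEDULER.scheduleAtFixedRate(new TimerTask() {
                    @Override
                    public void run() {
                        MyClock self = weakSelf.get();
                        if (self == null) {
                            cancel();         // owner is gone: stop firing
                        } else {
                            self.timerTick(); // owner still alive: do the work
                        }
                    }
                }, 1000, 1000);
            }

            private void timerTick() {
                // Do something fantastic.
            }
        }

    A non-retaining NSTimer wrapper, like the one mentioned above, is essentially the Objective-C equivalent of this weak-target trick.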

    Read the article

  • Error after installing Django (supposed PATH or PYTHONPATH "error")

    - by illuminated
    Hi all, I guess this is a PATH/PYTHONPATH error, but my attempts failed so far to make django working. System is Ubuntu 10.04, 64bit: mx:~/webapps$ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=10.04 DISTRIB_CODENAME=lucid DISTRIB_DESCRIPTION="Ubuntu 10.04 LTS" Python version: 2.6.5: @mx:~/webapps$ python -V Python 2.6.5 When I run django-admin.py, the following happens: mx:~/webapps$ django-admin.py Traceback (most recent call last): File "/usr/local/bin/django-admin.py", line 2, in <module> from django.core import management ImportError: No module named django.core Similar when I import django in python shell: mx:~/webapps$ python Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import django Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named django >>> quit() More details: mx:~/webapps$ python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()" /usr/lib/python2.6/dist-packages Within python shell: Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56) [GCC 4.4.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> print sys.path ['', '/usr/lib/python2.6/dist-packages/django', '/usr/local/lib/python2.6/dist-packages/django/bin', '/usr/local/lib/python2.6/dist-packages/django', '/home/petra/webapps', '/usr/lib/python2.6', '/usr/lib/python2.6/plat-linux2', '/usr/lib/python2.6/lib-tk', '/usr/lib/python2.6/lib-old', '/usr/lib/python2.6/lib-dynload', '/usr/lib/python2.6/dist-packages', '/usr/lib/python2.6/dist-packages/PIL', '/usr/lib/pymodules/python2.6'] django-admin.py can be found here: mx:~/webapps$ locate django-admin.py ~/install/sources/Django-1.2.1/build/lib.linux-i686-2.6/django/bin/django-admin.py ~/install/sources/Django-1.2.1/build/scripts-2.6/django-admin.py ~/install/sources/Django-1.2.1/django/bin/django-admin.py /usr/local/bin/django-admin.py /usr/local/lib/python2.6/dist-packages/django/bin/django-admin.py /usr/local/lib/python2.6/dist-packages/django/bin/django-admin.pyc and in the end this doesn't help: export PYTHONPATH="/usr/lib/python2.6/dist-packages/django:$PYTHONPATH" nor this: export PYTHONPATH="/usr/local/lib/python2.6/dist-packages/django:$PYTHONPATH" How to solve this !? Thanks all in advance! :)

    Read the article

  • Freetype2 failing under WoW64

    - by Necrolis
    I built a TTF-to-D3D-texture function using freetype2 (2.3.9) to generate grayscale maps from the fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free. I know it should work, as games like WCIII, which to the best of my knowledge use freetype2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own): FT_Face pFace = NULL; FT_Error nError = 0; FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer,&nSize)); if((nError = FT_New_Memory_Face(pLibrary,pFont,nSize,0,&pFace)) == 0) { FT_Set_Char_Size(pFace,nSize << 6,nSize << 6,96,96); for(unsigned char c = 0; c < 95; c++) { if(!FT_Load_Glyph(pFace,FT_Get_Char_Index(pFace,c + 32),FT_LOAD_RENDER)) { FT_Glyph pGlyph; if(!FT_Get_Glyph(pFace->glyph,&pGlyph)) { LOG("GET: %c",c + 32); FT_Glyph_To_Bitmap(&pGlyph,FT_RENDER_MODE_NORMAL,0,1); FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph); FT_Bitmap* pBitmap = &pGlyphMap->bitmap; const size_t nWidth = pBitmap->width; const size_t nHeight = pBitmap->rows; //add to texture atlas } } } } else { FT_Done_Face(pFace); delete pFont; return FALSE; } FT_Done_Face(pFace); delete pFont; return TRUE; } ARCHIVE_LoadFile returns blocks allocated with new. As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure as to whether this stretches the font to fit the size, or bounds it to a size. What I would like to do is render all the glyphs at, say, 24px (MS Word size here), then turn it into a signed distance field in a 32px area. Update: After much fiddling, I got a test app to work, which leads me to think the problems are arising from threading, as my code is running in a secondary thread. I have compiled freetype into a static lib using the multithreaded DLL runtime, and my app uses the multithreaded libs. I'm going to see if I can set up a multithreaded test. I also updated to 2.4.4, to see if the problem was a known but fixed bug; that didn't help, however. Update 2: After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4. After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the memory heap management of Windows. Is it possible that there is a bug in freetype2 that makes it blow up under user threads?

    Read the article

  • How to fine tune a Membership Provider?

    - by Venemo
    After all the answers to my last question about fine-tuning turned out to be more useful than I expected, I thought that I would ask another similar question about the MembershipProviders as well. Okay, so firstly, to clarify: I know what a Membership, Role, and Profile provider is, how to implement my own, and how to configure them, and most of the things about them. Implementing a role and profile provider is pretty straightforward, because they only require simple CRUD most of the time. (A single line of LINQ is enough for about half of the RoleProvider's methods.) However, the Membership provider is a different beast. Many of you may realize that it violates the SR (Single Responsibility) principle, because it has to do EVERYTHING related to user management. While this leaves a lot of room for customizations, it has its downsides as well. There is no information on the Internet about what their EXACT expected behaviour is, such as when they should throw exceptions or simply return null, and stuff like that. I use this sample implementation for reference, but it also contains several contradictions. For example, it uses its own ValidateUser method for checking credentials in the ChangePassword method. But ValidateUser also updates the user's LastLoginDate to the current date. So, does the framework expect that I set it in my own provider as well, or is it simply a mistake in the sample? The other is: the ChangePassword method throws an exception every time validation of the new password fails, but CreateUser doesn't ever throw an exception; it simply returns false. And last, but not least: it counts the invalid password attempts of the user and locks them if they pass a threshold. While this is good, it requires manual action to unlock the users. Is it a problem if my provider automatically unlocks the user after a certain amount of time? (EDIT) I almost forgot: the CreateUser method in the sample inserts the ID from the method parameter. I actually think this is bad practice, because I use integers with auto increment as IDs, so inserting them from some method parameter is not an option. Should I just ignore the parameter, or require that its value is null and throw an exception if it isn't? All in all, does ASP.NET have any assumptions about the behaviour of a MembershipProvider? Is there any documentation which describes when I should throw an exception or just return null? I also tried to find a set of generic unit tests which would provide some guidance about the expected behaviour, but no luck: I found plenty of articles about "Unit testing is good" and "How to unit test a MembershipProvider", but not one with any actual tests. Thanks in advance, everyone!
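
    On the auto-unlock question specifically, a common pattern is to record the time of the last failed attempt and treat the lock as expired once a configured window has passed, instead of requiring an administrator to clear it. A rough, framework-neutral sketch in Java; the class, field, and method names are invented for illustration and are not part of the ASP.NET provider contract:

        import java.time.Duration;
        import java.time.Instant;

        class AccountLockoutPolicy {
            private final int maxFailedAttempts;
            private final Duration lockoutWindow;

            AccountLockoutPolicy(int maxFailedAttempts, Duration lockoutWindow) {
                this.maxFailedAttempts = maxFailedAttempts;
                this.lockoutWindow = lockoutWindow;
            }

            /** True if the account should currently be treated as locked. */
            boolean isLockedOut(int failedAttempts, Instant lastFailedAttempt) {
                if (failedAttempts < maxFailedAttempts) {
                    return false;
                }
                // Auto-unlock: the lock silently expires once the window has elapsed.
                return lastFailedAttempt.plus(lockoutWindow).isAfter(Instant.now());
            }
        }

    The provider would then reset the failure counter on the next successful login, so no manual unlock step is ever needed.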

    Read the article

  • Delay keyboard input help

    - by Stradigos
    I'm so close! I'm using the XNA Game State Management example found here and trying to modify how it handles input so I can delay the key/create an input buffer. In GameplayScreen.cs I've declared a double called elapsedTime and set it equal to 0. In the HandleInput method I've changed the Keys.Right button press to: if (keyboardState.IsKeyDown(Keys.Left)) movement.X -= 50; if (keyboardState.IsKeyDown(Keys.Right)) { elapsedTime -= gameTime.ElapsedGameTime.TotalMilliseconds; if (elapsedTime <= 0) { movement.X += 50; elapsedTime = 10; } } else { elapsedTime = 0; } The pseudo code: if the right arrow key is not pressed, set elapsedTime to 0. If it is pressed, elapsedTime equals itself minus the milliseconds since the last frame. If the difference then equals 0 or less, move the object 50, and then set elapsedTime to 10 (the delay). If the key is being held down, elapsedTime should never be set to 0 via the else. Instead, after elapsedTime is set to 10 after a successful check, elapsedTime should get lower and lower because TotalMilliseconds is being subtracted from it. When that reaches 0, it successfully passes the check again and moves the object once more. The problem is, it moves the object once per press but doesn't work if you hold it down. Can anyone offer any sort of tip/example/bit of knowledge towards this? Thanks in advance, it's been driving me nuts. In theory I thought this would for sure work. CLARIFICATION: Think of a grid when you're thinking about how I want the block to move. Instead of just fluidly moving across the screen, it's moving by its width (sorta jumping) to the next position. If I hold down the key, it races across the screen. I want to slow this whole process down so that holding the key creates an X millisecond delay between it 'jumping'/moving by its width. EDIT: Turns out gameTime.ElapsedGameTime.TotalMilliseconds is returning 0... all of the time. I have no idea why.
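
    For what it's worth, the repeat-delay logic itself is easier to reason about as an accumulator that counts up toward a threshold and is re-armed when the key is released. Below is a language-neutral sketch of that idea written in Java syntax; the 200 ms figure and the names are illustrative, and it does not address why ElapsedGameTime reports 0, which is a separate problem:

        class KeyRepeatGate {
            private final double repeatDelayMs;
            private double elapsedMs;

            KeyRepeatGate(double repeatDelayMs) {
                this.repeatDelayMs = repeatDelayMs;
                this.elapsedMs = repeatDelayMs; // first press fires immediately
            }

            /** Call once per frame; true means the held key should trigger another jump. */
            boolean update(boolean keyDown, double frameMs) {
                if (!keyDown) {
                    elapsedMs = repeatDelayMs; // releasing the key re-arms the gate
                    return false;
                }
                elapsedMs += frameMs;
                if (elapsedMs >= repeatDelayMs) {
                    elapsedMs = 0;
                    return true;
                }
                return false;
            }
        }

    Constructed once with, say, a 200 ms delay, update() is called every frame with the key state and the frame time, and the block is moved one cell whenever it returns true.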

    Read the article

  • Tracking upstream svn changes with git-svn and github?

    - by Joseph Turian
    How do I track upstream SVN changes using git-svn and github? I used git-svn to convert an SVN repo to git on github: $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa $ cd osqa $ git remote add origin [email protected]:turian/osqa.git $ git push origin master I then made a few changes in my git repo, committed, and pushed to github. Now, I am on a new machine. I want to take upstream SVN changes, merge them with my github repo, and push them to my github repo. This documentation says: "If you ever lose your local copy, just run the import again with the same settings, and you’ll get another working directory with all the necessary SVN metainfo." So I did the following. But none of the commands work as desired. How do I track upstream SVN changes using git-svn and github? What am I doing wrong? $ git svn clone -s http://svn.osqa.net/svnroot/osqa/ osqa $ cd osqa $ git remote add origin [email protected]:turian/osqa.git $ git push origin master To [email protected]:turian/osqa.git ! [rejected] master -> master (non-fast forward) error: failed to push some refs to '[email protected]:turian/osqa.git' $ git pull remote: Counting objects: 21, done. remote: Compressing objects: 100% (17/17), done. remote: Total 17 (delta 7), reused 9 (delta 0) Unpacking objects: 100% (17/17), done. From [email protected]:turian/osqa * [new branch] master -> origin/master From [email protected]:turian/osqa * [new tag] master -> master You asked me to pull without telling me which branch you want to merge with, and 'branch.master.merge' in your configuration file does not tell me either. Please name which branch you want to merge on the command line and try again (e.g. 'git pull <repository> <refspec>'). See git-pull(1) for details on the refspec. ... $ /usr//lib/git-core/git-svn rebase warning: refname 'master' is ambiguous. First, rewinding head to replay your work on top of it... Applying: Added forum/management/commands/dumpsettings.py error: Ref refs/heads/master is at 6acd747f95aef6d9bce37f86798a32c14e04b82e but expected a7109d94d813b20c230a029ecd67801e6067a452 fatal: Cannot lock the ref 'refs/heads/master'. Could not move back to refs/heads/master rebase refs/remotes/trunk: command returned error: 1

    Read the article

  • Unable to execute a stored procedure using Java and JDBC on SQL Server

    - by jwmajors81
    I have been trying to execute a MS SQL Server stored procedure via JDBC today and have been unsuccessful thus far. The stored procedure has 1 input and 1 output parameter. With every combination I use when setting up the stored procedure call in code I get an error stating that the stored procedure couldn't be found. I have provided the stored procedure I'm executing below (NOTE: this is vendor code, so I cannot change it). set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROC [dbo].[spWCoTaskIdGen] @OutIdentifier int OUTPUT AS BEGIN DECLARE @HoldPolicyId int DECLARE @PolicyId char(14) IF NOT EXISTS ( SELECT * FROM UniqueIdentifierGen (UPDLOCK) ) INSERT INTO UniqueIdentifierGen VALUES (0) UPDATE UniqueIdentifierGen SET CurIdentifier = CurIdentifier + 1 SELECT @OutIdentifier = (SELECT CurIdentifier FROM UniqueIdentifierGen) END The code looks like: CallableStatement statement = connection .prepareCall("{call dbo.spWCoTaskIdGen(?)}"); statement.setInt(1, 0); ResultSet result = statement.executeQuery(); I get the following error: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried CallableStatement statement = connection .prepareCall("{? = call dbo.spWCoTaskIdGen(?)}"); statement.registerOutParameter(1, java.sql.Types.INTEGER); statement.registerOutParameter(2, java.sql.Types.INTEGER); statement.executeQuery(); The above results in: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried: CallableStatement statement = connection .prepareCall("{? = call spWCoTaskIdGen(?)}"); statement.registerOutParameter(1, java.sql.Types.INTEGER); statement.registerOutParameter(2, java.sql.Types.INTEGER); statement.executeQuery(); The code above resulted in the following error: Could not find stored procedure 'spWCoTaskIdGen'. Finally, I should also point out the following: I have used the MS SQL Server Management Studio tool and have been able to successfully run the stored procedure. The sql generated to execute the stored procedure is provided below: GO DECLARE @return_value int, @OutIdentifier int EXEC @return_value = [dbo].[spWCoTaskIdGen] @OutIdentifier = @OutIdentifier OUTPUT SELECT @OutIdentifier as N'@OutIdentifier ' SELECT 'Return Value' = @return_value GO The code being executed runs with the same user id that was used in point #1 above. In the code that creates the Connection object I log which database I'm connecting to and the code is connecting to the correct database. Any ideas? Thank you very much in advance.
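
    For comparison, the JDBC escape syntax for a procedure with a single OUTPUT parameter normally looks like the sketch below: the placeholder is registered as an OUT parameter at the index it occupies in the call string, execute() is used because nothing but the OUT parameter comes back, and the name can be qualified with the catalog in case the login's default database is not the one the procedure lives in. The open connection is assumed, and "MyDatabase" is a placeholder for the real database name.

        // Qualify catalog.schema.procedure so it is found even if the login's
        // default database differs from the one the procedure was created in.
        CallableStatement statement = connection.prepareCall(
                "{call MyDatabase.dbo.spWCoTaskIdGen(?)}");

        // The single placeholder is the OUTPUT parameter, so register it
        // instead of setting an input value for it.
        statement.registerOutParameter(1, java.sql.Types.INTEGER);

        // execute() rather than executeQuery(): the procedure returns no result set.
        statement.execute();

        int newIdentifier = statement.getInt(1);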

    Read the article

  • Qt vs. .NET - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    Man in all these Qt vs. .NET discussions 90% these people argue about the dumbest crap. Trying to get a real comparison chart here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is to make a comparison that highlights these so people can make more informed decisions before embarking on a project, in the spirit of R.A.D. Event Handling In Qt the event handling system is very simple. You just emit signals when something cool happens and then catch them in slots. ie. // run some calculations, then emit valueChanged(30, false, 20.2); and then catching it, any object can make a slot to recieve that message easily void MyObj::valueChanged(int percent, bool ok, float timeRemaining). It's easy to "block" an event or "disconnect" when needed, and works seamlessly across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET anonymous delegates are the bomb. I'm also talking about in more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about in the way the system feels to a human being. Qt wins hands down for the simplest yet still flexible event handling system ever i m o. Plugins and such I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of .dll's (there's ways to combine everything into a single exe though). That is a big bonus for modular projects, which are a PITA to import stuff in C++ as far as RAD is concerned. Database Ease of Doing Crap Also what about datasets and database manipulations. I think .net wins here but I'm not sure. Threading/Conccurency How do you guys think of the threading? In .NET, all I've ever done is make like a list of master worker threads with locks. I like QConcurrentFramework, you don't worry about locks or anything, and with the ease of the signal slot system across threads it's nice to get notified about the progress of things. QConcurrent is the simplest threading mechanism I've ever played with. Memory Usage Also what do you think of the overall memory usage comparison. Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer then clean it up when it's about to really lag? Doesn't the just-in-time compiler make native code that is pretty good, like and that only happens the first time the program is run? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.

    Read the article

  • Problem serializing complex data using WCF

    - by Gustavo Paulillo
    Scenario: WCF client app, calling a web-service (JAVA) operation, wich requires a complex object as parameter. Already got the metadata. Problem: The operation has some required fields. One of them is a enum. In the SOAP sent, isnt the field above (generated metadata) - Im using WCF diagnostics and Windows Service Trace Viewer: [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(TypeName="Consult-Filter", Namespace="http://webserviceX.org/")] public partial class ConsFilter : object, System.ComponentModel.INotifyPropertyChanged { private PersonType customerTypeField; Property: [System.Xml.Serialization.XmlElementAttribute("customer-type", Form=System.Xml.Schema.XmlSchemaForm.Unqualified, Order=1)] public PersonType customerType { get { return this.customerTypeField; } set { this.customerTypeField = value; this.RaisePropertyChanged("customerType"); } } The enum: [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")] [System.SerializableAttribute()] [System.Xml.Serialization.XmlTypeAttribute(TypeName="Person-Type", Namespace="http://webserviceX.org/")] public enum PersonType { /// <remarks/> F, /// <remarks/> J, } The trace log: <MessageLogTraceRecord> <HttpRequest xmlns="http://schemas.microsoft.com/2004/06/ServiceModel/Management/MessageTrace"> <Method>POST</Method> <QueryString></QueryString> <WebHeaders> <VsDebuggerCausalityData>data</VsDebuggerCausalityData> </WebHeaders> </HttpRequest> <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"> <s:Header> <Action s:mustUnderstand="1" xmlns="http://schemas.microsoft.com/ws/2005/05/addressing/none"></Action> <ActivityId CorrelationId="correlationId" xmlns="http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics">activityId</ActivityId> </s:Header> <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <filter xmlns="http://webserviceX.org/"> <product-code xmlns="">116</product-code> <customer-doc xmlns="">777777777</customer-doc> </filter> </s:Body> </s:Envelope> </MessageLogTraceRecord>

    Read the article

  • Rendering javascript at the server side level. A good or bad idea?

    - by davidhong
    I want to make it clear first: This isn't a question in relation to server-side Javascript or running Javascript server side. This is a question regarding rendering of Javascript code (which will be executed on the client-side) from server-side code. Having said that, take a look at the ASP.NET code below, for example: hlRemoveCategory.Attributes.Add("onclick", "return confirm('Are you sure you want to delete this?');") This is prescribing the client-side onclick event on the server-side. As opposed to: $('a[rel=remove]').bind('click', function(event) { return confirm('Are you sure you want to delete this?'); }); Now the question I want to ask is: What is the benefit of rendering javascript from the server-side code? Or vice versa? I personally prefer the second way of hooking up client-side UI/behaviour to HTML elements, for the following reasons: The server-side does whatever it needs to already, including data validation, event delegation and so on; and What the server-side sees as an event is not necessarily the same process on the client-side, i.e., there are plenty more events on the client-side (just look at custom events); and What happens on the client-side and on the server-side, during an event, could be completely irrelevant and decoupled; and Whatever happens on the client-side happens on the client-side; there is no need for the server to know. The server should process and run what is given to it; how the process comes to life is not really up to it to decide in the case of client-side events; and so on and so forth. These are my thoughts, obviously. I want to know what others think and if there have been any discussions on this topic. Topics branching from this argument can reach: Code management: is it easier to render everything from the server-side? Separation of concerns: is it easier if client-side logic is separated from server-side logic? Efficiency: which is more efficient, both in terms of coding and running? At the end of the day, I am trying to move my team to go towards the second approach. There are a lot of old guys on this team who are afraid of this change. I just wish to convince them with the right facts and stats. Let me know your thoughts.

    Read the article

  • Best Practice: Legitimate Cross-Site Scripting

    - by Ryan
    While cross-site scripting is generally regarded as negative, I've run into several situations where it's necessary. I was recently working within the confines of a very limiting content management system. I needed to include database code within the page, but the hosting server didn't have anything usable available. I set up a couple barebones scripts on my own server, originally thinking that I could use AJAX to import the contents of my scripts directly into the template of the CMS (thus retaining dynamic images, menu items, CSS, etc.). I was wrong. Due to the limitations of XMLHttpRequest objects, it's not possible to grab content from a different domain. So I thought "iFrame" - even though I'm not a fan of frames, I thought that I could create a frame that matched the width and height of the content, so that it would appear native. Again, I was blocked by cross-site scripting "protections." While I could indeed load a remote file into the iFrame, I couldn't execute JavaScript to modify its size on either the host page or inside the loaded page. In this particular scenario, I wasn't able to point a subdomain to my server. I also couldn't create a script on the CMS server that could proxy content from my server, so my last thought was to use a remote JavaScript. A remote JavaScript works. It breaks when the user has JavaScript disabled, which is a downside; but it works. The "problem" I was having with using a remote JavaScript was that I had to use the JS function document.write() to output any content. Any output that isn't JS causes script errors. In addition to using document.write() for every line, you also have to ensure that the content is escaped - or else you end up with more script errors. My solution was as follows: My script received a GET parameter ("page") and then looked for the file ({$page}.php), and read the contents into a variable. However, I had to use awkward buffering techniques in order to actually execute the included scripts (for things like database interaction) then strip the final content of all line break characters ("\n") followed by escaping all required characters. The end result is that my original script (which outputs JavaScript) accesses seemingly "standard" scripts on my server and converts their standard output to JavaScript for displaying within the CMS template. While this solution works, it seems like there may be a better way to accomplish the same thing. What is the best way to make cross-site scripting work specifically for the purpose of including content from a completely different domain?
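
    The "convert standard output to JavaScript" step described above boils down to escaping the captured markup and emitting it through document.write. Below is a rough sketch of that transformation in Java servlet form, purely to illustrate the shape of the approach; the original was PHP, renderRequestedPage is an assumed helper standing in for the {$page}.php include-and-buffer step, and a real version would whitelist the page parameter:

        import java.io.IOException;
        import java.io.PrintWriter;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class RemoteContentAsJsServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String html = renderRequestedPage(req.getParameter("page"));

                // Escape anything that would break out of a single-quoted JS string.
                String escaped = html
                        .replace("\\", "\\\\")
                        .replace("'", "\\'")
                        .replace("\r", "")
                        .replace("\n", "\\n")
                        .replace("</script>", "<\\/script>");

                resp.setContentType("text/javascript");
                PrintWriter out = resp.getWriter();
                out.println("document.write('" + escaped + "');");
            }

            private String renderRequestedPage(String page) {
                // Placeholder: run the real script and capture its output here.
                return "<div>content for " + page + "</div>";
            }
        }

    The consuming CMS template then just includes a script tag pointing at this endpoint, exactly as described above, with the usual caveat that users who disable JavaScript see nothing.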

    Read the article

  • How to design a generic database whose layout may change over time?

    - by mawg
    Here's a tricky one - how do I programmatically create and interrogate a database whose contents I can't really foresee? I am implementing a generic input form system. The user can create PHP forms with a WYSIWYG layout and use them for any purpose he wishes. He can also query the input. So, we have three stages: a form is designed and generated. This is a one-off procedure, although the form can be edited later. This designs the database. someone or several people make use of the form - say for daily sales reports, stock keeping, payroll, etc. Their input to the forms is written to the database. others, maybe management, can query the database and generate reports. Since these forms are generic, I can't predict the database structure - other than to say that it will reflect HTML form fields and consist of the data input from a collection of edit boxes, memos, radio buttons and the like. Questions and remarks: A) how can I best structure the database, in terms of tables and columns? What about primary keys? My first thought was to use the control name to identify each column, then I realized that the user can edit the form and rename controls, so that maybe "name" becomes "employee" or "wages" becomes "salary". I am leaning towards a unique number for each. B) how best to key the rows? I was thinking of a timestamp to allow me to query, and a column for the row Id from A). C) I have to handle column rename/insert/delete. For deletion, I am unsure whether to delete the data from the database. Even if the user is not inputting it from the form any more, he may wish to query what was previously entered. Or there may be some legal requirements to retain the data. Any gotchas in column rename/insert/delete? D) For the querying, I can have my PHP interrogate the database to get column names and generate a form with a list where each entry has a database column name, a checkbox to say if it should be used in the query and, based on column type, some selection criteria. That ought to be enough to build searches like "position = 'senior salesman' and salary > 50k". E) I probably have to generate some fancy charts - graphs, histograms, pie charts, etc. for query results of numerical data over time. I need to find some good FOSS PHP for this. F) What else have I forgotten? This all seems very tricky to me, but I am a database n00b - maybe it is simple to you gurus?
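
    One common shape for this kind of schema is sketched below as JDBC DDL: surrogate keys for fields so renames never touch stored data (question A), timestamped submission rows (question B), and a soft-delete flag instead of dropping columns (question C). The connection URL, credentials, and all table and column names are placeholders, and the MySQL-style types are only one possibility:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class FormSchemaSketch {
            public static void main(String[] args) throws SQLException {
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:mysql://localhost/forms", "user", "pass");
                     Statement st = conn.createStatement()) {

                    // One row per designed form.
                    st.executeUpdate("CREATE TABLE forms ("
                            + " form_id INT AUTO_INCREMENT PRIMARY KEY,"
                            + " title   VARCHAR(200) NOT NULL)");

                    // One row per field; the label can be renamed freely because
                    // stored data references field_id, never the label text.
                    st.executeUpdate("CREATE TABLE form_fields ("
                            + " field_id   INT AUTO_INCREMENT PRIMARY KEY,"
                            + " form_id    INT NOT NULL,"
                            + " label      VARCHAR(200) NOT NULL,"
                            + " field_type VARCHAR(30) NOT NULL,"        // edit box, memo, radio...
                            + " is_deleted TINYINT NOT NULL DEFAULT 0)"); // hide, don't drop

                    // One row per submission of a form.
                    st.executeUpdate("CREATE TABLE form_entries ("
                            + " entry_id     INT AUTO_INCREMENT PRIMARY KEY,"
                            + " form_id      INT NOT NULL,"
                            + " submitted_at TIMESTAMP NOT NULL)");

                    // One row per field value within a submission.
                    st.executeUpdate("CREATE TABLE entry_values ("
                            + " entry_id INT NOT NULL,"
                            + " field_id INT NOT NULL,"
                            + " value    TEXT,"
                            + " PRIMARY KEY (entry_id, field_id))");
                }
            }
        }

    Reporting (questions D and E) then reduces to joining entry_values to form_fields by field_id, filtering on the selected fields, and feeding the numeric results to whatever charting library is chosen.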

    Read the article

  • Sharepoint web part stops working because of Resources.en-US.resx file

    - by Eric C
    I've been developing a Sharepoint web part, which had been working fine upon deployment. The web part has been developed with WSP Builder, packaged up and then deployed via stsadm. The web part has been deployed tens, if not a hundred times to the dev box with no problems. Now, the web part throws an error which breaks the page it's on: Object reference not set to an instance of an object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [NullReferenceException: Object reference not set to an instance of an object.] NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.HandleException(Exception ex) +62 NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.OnLoad(EventArgs e) +214 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 When looking through my Sharepoint logs, I find these errors repeated over and over which correspond to the time the web part was attempted to be loaded: 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 72kg High (#2: Cannot open "Resources.en-US.resx": no such file or folder.) 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e26 Medium Failed to open the language resource for Fea367b94a9-4a15-42ba-b4a2-32420363e018 keyfile Resources. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "XomlUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'XomlUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "RulesUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'RulesUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". I've retracted the web part manually through Solution Management, retracted through stsadm, checked for the existence of the resource file, which is nowhere to be found. I'm pretty much at a loss to why this happened or how to resolve it.

    Read the article

  • Qt vs .NET - a few comparisons [closed]

    - by Pirate for Profit
    Event Handling In Qt the event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){} blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about in more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about in the way the system feels to a human being. Qt wins hands down i m o. Basically, the footprints make more sense and you can visualize the project easier without the clunky event handling system. I wish I could it explain it better. The only thing is, I do love some of the ease of C# compared to C++ and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++. Database Ease of Doing Crap Also what about datasets and database manipulations. I think .net wins here but I'm not sure. Threading/Conccurency How do you guys think of the threading? In .NET, all I've ever done is make like a list of master worker threads with locks. I like QConcurrentFramework, you don't worry about locks or anything, and with the ease of the signal slot system across threads it's nice to get notified about the progress of things. Memory Usage Also what do you think of the overall memory usage comparison. Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.

    Read the article

  • PHP Processes reaching limit along with FastCGI timeouts

    - by Constant M
    I have a problem similar to http://stackoverflow.com/questions/1168384/how-to-troubleshoot-php-processes, but with a twist. We have a couple of managed servers that recently migrated to FastCGI. Since then we've been having problems. We have a content management system at a central place that we manage all our sites with. All the sites run on the same managed server. So basically its one PHP script that generates pages on the same server. The catch comes in where it works perfectly on some, and gives FCGI timeout errors on others. At the same time our support is saying that the PHP processes heap up to their limit and then causes Apache to stop working. I'm convinced this is a fault on their side, but would like to get my side clean too. Anyway have any suggestions where I can start looking for what's going wrong? I know it's a really broad question, but any help or pointers would be great. Thanks Just to add to that, here's a PS printout from support: happyh 13853 21556 0 01:04 ? 00:00:00 /usr/bin/php5-cgi happyh 13869 13853 0 01:04 ? 00:00:00 /usr/bin/php5-cgi mamedc 13914 21556 0 05:14 ? 00:00:00 /usr/bin/php5-cgi wealthf 13947 21556 0 01:21 ? 00:00:00 /usr/bin/php5-cgi wealthf 13961 13947 0 01:21 ? 00:00:00 /usr/bin/php5-cgi mamedc 14032 13914 0 05:14 ? 00:00:00 /usr/bin/php5-cgi lookgrt 14157 21556 0 04:47 ? 00:00:00 /usr/bin/php5-cgi lookgrt 14178 14157 0 04:47 ? 00:00:00 /usr/bin/php5-cgi wolfie 14262 21556 0 01:08 ? 00:00:00 /usr/bin/php5-cgi wolfie 14276 14262 0 01:08 ? 00:00:00 /usr/bin/php5-cgi yaukrl 14352 21556 0 01:21 ? 00:00:00 /usr/bin/php5-cgi yaukrl 14361 14352 0 01:21 ? 00:00:00 /usr/bin/php5-cgi itpays2 14538 21556 0 01:33 ? 00:00:00 /usr/bin/php5-cgi itpays2 14547 14538 0 01:33 ? 00:00:00 /usr/bin/php5-cgi brichmbx 14732 21556 0 04:47 ? 00:00:00 /usr/bin/php5-cgi brichmbx 14803 14732 0 04:47 ? 00:00:00 /usr/bin/php5-cgi greatl 14969 21556 0 01:00 ? 00:00:00 /usr/bin/php5-cgi

    Read the article

  • Execute SQL SP in Excel VBA

    - by TheOCD
    Hi, I am having a problem getting all the columns back when I execute the following code in Excel VBA. I only get 6 out of 23 columns back. Connection, command, etc. work fine (I can see the exec command in SQL Profiler); data headers are created for all 23 columns, but I only get data for 6 columns. Side note: it's not prod-level code - I have left out error handling on purpose. The SP works fine in SQL Management Studio, ASP.NET, and a C# WinForms app; this is for Excel 2003 connecting to SQL 2008. Can someone help me troubleshoot it? Dim connection As ADODB.connection Dim recordset As ADODB.recordset Dim command As ADODB.command Dim strProcName As String 'Stored Procedure name Dim strConn As String ' connection string. Dim selectedVal As String 'Set ADODB requirements Set connection = New ADODB.connection Set recordset = New ADODB.recordset Set command = New ADODB.command If Workbooks("Book2.xls").MultiUserEditing = True Then MsgBox "You do not have Exclusive access to the workbook at this time." & _ vbNewLine & "Please have all other users close the workbook and then try again.", vbOKOnly + vbExclamation Exit Sub Else On Error Resume Next ActiveWorkbook.ExclusiveAccess 'On Error GoTo No_Bugs End If 'set the active sheet Set oSht = Workbooks("Book2.xls").Sheets(1) 'get the connection string, if empty just exit strConn = ConnectionString() If strConn = "" Then Exit Sub End If ' selected value, if <NOTHING> just exit selectedVal = selectedValue() If selectedVal = "<NOTHING>" Then Exit Sub End If If Not oSht Is Nothing Then 'Open database connection connection.ConnectionString = strConn connection.Open ' set command stuff. command.ActiveConnection = connection command.CommandText = "GetAlbumByName" command.CommandType = adCmdStoredProc command.Parameters.Refresh command.Parameters(1).Value = selectedVal 'Execute stored procedure and return to a recordset Set recordset = command.Execute() If recordset.BOF = False And recordset.EOF = False Then Sheets("Sheet2").[A1].CopyFromRecordset recordset ' Create headers and copy data With Sheets("Sheet2") For Column = 0 To recordset.Fields.Count - 1 .Cells(1, Column + 1).Value = recordset.Fields(Column).Name Next .Range(.Cells(1, 1), .Cells(1, recordset.Fields.Count)).Font.Bold = True .Cells(2, 1).CopyFromRecordset recordset End With Else MsgBox "b4 BOF or after EOF.", vbOKOnly + vbExclamation End If 'Close database connection and clean up If CBool(recordset.State And adStateOpen) = True Then recordset.Close Set recordset = Nothing If CBool(connection.State And adStateOpen) = True Then connection.Close Set connection = Nothing Else MsgBox "oSheet2 is Nothing.", vbOKOnly + vbExclamation End If
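
    One classic cause of getting headers but only part of the data back from a stored procedure is that the procedure emits extra results (the row counts from its INSERT and UPDATE statements) ahead of the real SELECT, so the client reads the wrong recordset. Whether that is the culprit here is not certain, but it is cheap to rule out: either add SET NOCOUNT ON at the top of the procedure or walk every result on the client (ADO's NextRecordset). For comparison, the equivalent walk in JDBC (Java) looks like the sketch below, which assumes an open connection and the same GetAlbumByName procedure:

        CallableStatement stmt = connection.prepareCall("{call GetAlbumByName(?)}");
        stmt.setString(1, selectedVal);

        boolean isResultSet = stmt.execute();
        while (true) {
            if (isResultSet) {
                try (ResultSet rs = stmt.getResultSet()) {
                    // A real result set: all 23 columns are available here.
                    ResultSetMetaData meta = rs.getMetaData();
                    while (rs.next()) {
                        for (int i = 1; i <= meta.getColumnCount(); i++) {
                            System.out.print(rs.getString(i) + "\t");
                        }
                        System.out.println();
                    }
                }
            } else if (stmt.getUpdateCount() == -1) {
                break; // no more results of any kind
            }
            isResultSet = stmt.getMoreResults();
        }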

    Read the article

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like this aren't available by default on most phones: full text search: searches all address book contents, messages, anything else being a plus better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something etc.) bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone) no multitasking capabilities worth mentioning and in general no e.g. weekly software updates, making the phone much more usable (even if it had to be done over USB, rather than over the network). I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year old phone has a CPU an order of magnitude faster than my first PC, about as much storage space and it's ridiculous how bad (slow, unwieldy) the software is and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up? Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme, if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation: messages, calling, calendar, everything out of reach, although I can receive a call. No "battery life" issue there: it's charging while connected. BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast forward functionality only aggravates the problem.

    Read the article

  • Buy or Build for web deployment?

    - by Cannonade
    I have been evaluating the wide range of installation and web deployment solutions available for Windows applications. I will just clarify here (without too much detail, these tools have been covered in other questions) my understanding of the options: NSIS - Free tool that generates setup executables. Small binary. Specialized, sometimes obtuse, scripting language. Inno Setup - Free tools for setup executables. Various binary compression schemes. Pascal scripting engine. WIX - Free toolset to generate MSI binaries. XML definitions language. WIX ClickThrough - Additional tools for packaging, web download and auto update detection (now part of WIX core). InstallShield - Commercial development environment for installation packaging. Generates MSI binaries. C-like InstallScript language. Wise - Commercial development environment for installation packaging. Generates MSI binaries. ClickOnce - Visual Studio supported framework for publishing applications to a webserver, with automatic detection of updates. No support for custom installation requirements (INI files, registry etc ...). Packages setup as an MSI binary. Install Aware - Commercial development environment for installation. Generates MSI binaries. Automatic Update framwork (Web Update). If I have missed any, please let me know. And found some useful discussions of these technologies on StackOverflow: Best Simple Install System Best choice for Windows installers Alternatives to ClickOnce I have worked with a few of these solutions, as well as a handful of proprietary internal installation solutions. They are mostly concerned with packing installations and providing a framework for developers to access the run time environment. With the growing requirement for web deployment and automatic software updates, I expected to find more of a consensus among developers on a framework for web delivery of software and subsequent updates, I haven't really found that consensus. There are certainly solutions available (ClickOnce, ClickThrough, InstallShield Update Service), but they each have considerable limitations (please correct me if I mis-represent any of these). I would be interested in a framework that provided some of the following: Third party hosting/management of updates. Access to client environment (INI files, registry, etc..). User registration/activation. Feedback/Error reporting This is leaving me with the strong impression that the best way to approach the web deployment problem is through a custom built proprietary solution (possibly leveraging existing installer packaging). I have seen this sort of solution work well for a number of successful applications: FileZilla - HTTP request to update.filezilla-project.org to check for updates, downloads an NSIS binary (I think) and then shuts down to run the install.

    Read the article

  • Impersonation on Windows 2000 to Windows XP Leaves Connections Open

    - by Tallek
    I'm running on a Windows 2000 Pro SP4 box (off domain) and trying to impersonate a local user on a Windows XP box (on domain). I'm using code very similar to the WindowsImpersonationContextFacade in the question posted here: http://stackoverflow.com/questions/879704/how-can-i-temporarily-impersonate-a-user-to-open-a-file. I am using impersonation to remotely start and stop windows services as well as access network shares (for some automated integration tests). To get this working, i had to use LOGON32_PROVIDER_DEFAULT and LOGON32_LOGON_NEW_CREDENTIALS when calling LogonUser. Everything worked beautifully ( Windows XP on domain to Windows XP on domain, Windows XP on domain to Windows Server 2003 off domain, and even Windows XP on domain to Windows 2000 off domain). The one issue was running on Windows 2000 Pro SP4 off the domain and trying to impersonate a local user on a Windows XP box running on the domain. To get the Windows 2000 piece working, i had to use LOGON32_PROVIDER_WINNT50 and LOGON32_LOGON_NEW_CREDENTIALS when calling LogonUser. This seemed to get me 95% of the way there, i could now impersonate the local user on the XP box and start/stop services as well as access a network share using the impersonated credentials. I'm running in to one problem though, calling Undo impersonation and closing the token handle seems to leave the connection to the remote box open. After about 10 or so impersonation calls, further impersonation attempts will fail with an error saying something about too many connections are currently open. If i look at the Computer Management - System Tools - Shared Folders - Sessions on my remote Windows XP box, i can see about 10 sessions open to the Windows 2000 box. I can manually close these (i think they may eventually close themselves, but not very quickly) and then impersonation begins working again few more times. This open session issue doesn't seem to be a problem in any of my other test scenarios, just when running locally on a Windows 2000 box. Any ideas? Edit 1: After some more testing and trying out many different things, this seems to be an issue with open sessions not being reused. On Windows 2000 only, every call to LogonUser to get a token and then using that token to impersonate seems to result in a new session being created. I'm guessing Windows XP & Windows Server 2003 are reusing open sessions since i don't seem to be having any issues with them. If I call LogonUser once, then cache the token, I seem to be able to make as many calls to impersonate as I need using the cached token without running in to the "too many connections" issue. This seems like an ugly work around though since i can't call CloseHandle() on my token every time i perform impersonation. Anybody have any thoughts or ideas, or am i stuck with this ugly hack? Thanks

    Read the article

  • How do I integrate a new MVC C# Project with an existing Web Forms VB.NET Web Application Project?

    - by Jordan Rieger
    We have a corporate website with a large amount of dynamic business application pages (e.g. Shopping Cart, Helpdesk, Product/Service management, Reporting, etc.) The site was built as an ASP.Net Web Application Project (WAP). Our systems have evolved over the years to use .NET 4.5 and various custom business logic DLLs (written in a mix of C# and VB.NET). However, the site itself is still using VB.NET Web Forms. We now have done a few side projects in MVC 4 using Razor/C#, and we want to use this framework for new pages on the main corporate site going forward. What would be the easiest way to achieve this? I found this nice list of steps to integrate MVC 4 into an existing Web Forms app. The problem is that because our existing app is a VB.NET WAP, it compiles into a single DLL, and .NET allows only one language per DLL. The site is way too big for us to contemplate converting it to C# all at once (yes, I've looked at the conversion tools, and they're good, but even 99% accuracy would leave us a huge amount of cleanup work.) I thought about converting the existing WAP into a Web Site Project (WSP) which does allow mixing languages and then following the steps above, but after a few pages of Google results, I couldn't find any steps for converting a WAP to WSP. (Plenty of sites offer the reverse steps: converting a WSP to a WAP.) Another idea I had was to create a completely separate MVC project, and then somehow squish them together into the same folder structure, where they would share the bin folder but compile to separate DLL's. I have no idea if this is possible, because certain files would collide (e.g. Global.asax, web.config, etc.) Finally, I can imagine a compromise solution where we keep all the MVC stuff in its own separate application under a subfolder of the main solution. We already use our own custom session state solution, so it wouldn't be difficult to pass data between the old site to the new pages. Which of the ideas above do you think makes the most sense for us? Is there another solution that I'm missing?

    Read the article

  • PHP MySQL syntax error

    - by Jacksta
    My script is supposed to look up contacts in a table and present them on the screen to then be edited. However, this is not the case. I am getting the error: Parse error: syntax error, unexpected $end in /home/admin/domains/domain.com.au/public_html/pick_modcontact.php on line 50. NOTE: this is the last line in this script. <? session_start(); if ($_SESSION[valid] != "yes") { header( "Location: contact_menu.php"); exit; } $db_name = "testDB"; $table_name = "my_contacts"; $connection = @mysql_connect("localhost", "user", "pass") or die(mysql_error()); $db = @mysql_select_db($db_name, $connection) or die(mysql_error()); $sql = "SELECT id, f_name, l_name FROM $table_name ORDER BY f_name"; $result = @mysql_query($sql, $connection) or die(mysql_error()); $num = @mysql_num_rows($result); if ($num < 1) { $display_block = "<p><em>Sorry No Results!</em></p>"; } else { while ($row = mysql_fetch_array($result)) { $id = $row['id']; $f_name = $row['f_name']; $l_name = $row['l_name']; $option_block .= "<option value\"$id\">$l_name, $f_name</option>"; } $display_block = "<form method=\"POST\" action=\"show_modcontact.php\"> <p><strong>Contact:</strong> <select name=\"id\">$option_block</select> <input type=\"submit\" name=\"submit\" value=\"Select This Contact\"></p> </form>"; ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Modify A Contact</title> </head> <body> <h1>My Contact Management System</h1> <h2><em>Modify a Contact</em></h2> <p>Select a contact from the list below, to modify the contact's record.</p> <? echo "$display_block"; ?> <br> <p><a href="contact_menu.php">Return to Main Menu</a></p> </body> </html>

    Read the article

  • How do I recover from an unchecked exception?

    - by erickson
    Unchecked exceptions are alright if you want to handle every failure the same way, for example by logging it and skipping to the next request, displaying a message to the user and handling the next event, etc. If this is my use case, all I have to do is catch some general exception type at a high level in my system, and handle everything the same way. But I want to recover from specific problems, and I'm not sure the best way to approach it with unchecked exceptions. Here is a concrete example. Suppose I have a web application, built using Struts2 and Hibernate. If an exception bubbles up to my "action", I log it, and display a pretty apology to the user. But one of the functions of my web application is creating new user accounts, that require a unique user name. If a user picks a name that already exists, Hibernate throws an org.hibernate.exception.ConstraintViolationException (an unchecked exception) down in the guts of my system. I'd really like to recover from this particular problem by asking the user to choose another user name, rather than giving them the same "we logged your problem but for now you're hosed" message. Here are a few points to consider: There a lot of people creating accounts simultaneously. I don't want to lock the whole user table between a "SELECT" to see if the name exists and an "INSERT" if it doesn't. In the case of relational databases, there might be some tricks to work around this, but what I'm really interested in is the general case where pre-checking for an exception won't work because of a fundamental race condition. Same thing could apply to looking for a file on the file system, etc. Given my CTO's propensity for drive-by management induced by reading technology columns in "Inc.", I need a layer of indirection around the persistence mechanism so that I can throw out Hibernate and use Kodo, or whatever, without changing anything except the lowest layer of persistence code. As a matter of fact, there are several such layers of abstraction in my system. How can I prevent them from leaking in spite of unchecked exceptions? One of the declaimed weaknesses of checked exceptions is having to "handle" them in every call on the stack—either by declaring that a calling method throws them, or by catching them and handling them. Handling them often means wrapping them in another checked exception of a type appropriate to the level of abstraction. So, for example, in checked-exception land, a file-system–based implementation of my UserRegistry might catch IOException, while a database implementation would catch SQLException, but both would throw a UserNotFoundException that hides the underlying implementation. How do I take advantage of unchecked exceptions, sparing myself of the burden of this wrapping at each layer, without leaking implementation details?
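
    One concrete way to get the "pick another user name" behaviour without leaking Hibernate through every layer is to translate the unchecked exception exactly once, at the persistence boundary, into a domain-level exception that the Struts action can catch. A sketch of that translation follows; UserRegistry comes from the text above, while DuplicateUserNameException, the session field, and getUserName() are invented for illustration, and real code would inspect e.getConstraintName() rather than assume the unique user-name constraint was the one violated:

        // Domain-level exception: callers know about user names, not about Hibernate.
        class DuplicateUserNameException extends RuntimeException {
            DuplicateUserNameException(String userName, Throwable cause) {
                super("User name already taken: " + userName, cause);
            }
        }

        class HibernateUserRegistry implements UserRegistry {
            private final org.hibernate.Session session;

            HibernateUserRegistry(org.hibernate.Session session) {
                this.session = session;
            }

            @Override
            public void createUser(User user) {
                try {
                    session.save(user);
                    session.flush(); // force the INSERT so the violation surfaces here
                } catch (org.hibernate.exception.ConstraintViolationException e) {
                    // Translate once, at the boundary; upper layers catch the domain
                    // exception and re-prompt for a different name.
                    throw new DuplicateUserNameException(user.getUserName(), e);
                }
            }
        }

    The action layer then catches DuplicateUserNameException specifically and lets everything else bubble up to the generic "we logged your problem" handler, which keeps the race-free, try-and-recover approach without wrapping a checked exception at every call site.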

    Read the article

  • Classes to Entities; Like-class inheritance problems

    - by Stacey
    Beyond work, some friends and I are trying to build a game of sorts. The way we structure some of it works pretty well for a normal object-oriented approach, but as most developers will attest, that does not always translate well into a database-persistence approach. This is not the exact layout of what we have; it is just a sample model given for the sake of illustration. The whole project is being done in C# 4.0, and we have every intention of using Entity Framework 4.0 (unless Fluent NHibernate can really offer us something we outright cannot do in EF).

    One of the problems we keep running across is inheritance in database models. Using the Entity Framework designer, I can draw the same code I have below, but I'm sure it is pretty obvious that it doesn't work the way it is expected to. To clarify a little: 'Items' have bonuses, which can be of anything. Therefore, every part of the game must derive from something similar, so that no matter what is 'changed', it is all at a basic enough level to be hooked into. Sounds fairly simple and straightforward, right? So we inherit everything that pertains to the game from 'Unit': Weights, Measures, Random (think dice, maybe?), and other such entities. Some of them are similar, but in code they will each react differently.

    We're having a really big problem with abstracting this kind of thing into a database model. Without 'Enum' support, it is proving difficult to translate into multiple tables that still share a common listing. One solution we've come up with is a 'key ring' approach, where everything that attaches to a character is stored on a 'Ring' with a 'Key', and each Key has a Value that represents a type. This works functionally, but we've discovered it becomes very sluggish and performs poorly. We also dislike this approach because it begins to feel as if everything is 'dumped' into one class, which makes management and logical structure difficult to maintain. I was hoping someone else might have some ideas on what I could do with this problem; it's really driving me up the wall.

    To summarize: the goal is to build a type (Unit) that can be used as a base type (Table per Type) for generic reference across a relatively global scope, without having to dump everything into a single collection. I can use an Interface to determine actual behavior, so that isn't too big of an issue. This is 'roughly' the same idea expressed in the Entity Framework.
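
    (Not part of the original question: the question targets Entity Framework 4.0, but for comparison, here is a minimal sketch of the same "table per type" base-class idea expressed with JPA annotations in Java. The Unit, Item, and Weight names come from the question; the fields and the JPA mapping itself are invented for illustration and are not the poster's model.)

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Inheritance;
    import javax.persistence.InheritanceType;

    // JPA's InheritanceType.JOINED is the analogue of EF's "Table per Type": one
    // table holds the shared Unit columns, and each subclass gets its own table
    // joined to it by primary key.
    @Entity
    @Inheritance(strategy = InheritanceType.JOINED)
    class Unit {
        @Id
        @GeneratedValue
        Long id;

        String name;        // illustrative field only
    }

    @Entity
    class Item extends Unit {
        int bonus;          // "Items have bonuses", per the question
    }

    @Entity
    class Weight extends Unit {
        double amount;      // illustrative field only
    }

    Queries against Unit then return Items, Weights, and any other subclass polymorphically, which is the "generic reference across a relatively global scope" the question describes, without collapsing everything into a single dumping-ground table or class.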

    Read the article

  • WLI domain with 3 servers - issues on JPD process startup

    - by XpiritO
    Hi there. I'm currently working on a clustered WLI environment comprising 3 servers: 1 admin server ("AdminServer") and 2 managed servers ("mn1" and "mn2") grouped as a cluster, as follows:

    Architecture diagram: http://img72.imageshack.us/img72/4112/clusterdiagram.jpg

    I've developed a JPD process to execute some scheduled tasks, invoked using a Message Broker. I've deployed this project into a single-server WLI domain (with AdminServer only) and it works as expected: the JPD process is invoked (I've configured a Timer Event Generator instance to start it up).

    Message broker: http://img532.imageshack.us/img532/1443/wlimessagebroker.jpg
    Timer event generator: http://img408.imageshack.us/img408/7358/wlitimereventgenerator.jpg

    In order to achieve fail-over and load-balancing capabilities, I'm now trying to deploy this JPD process into the clustered WLI environment. However, I'm having some issues: I cannot get it to work properly, even though it appears to be running. Here is a screenshot of the "WLI Process Instance Monitor" (with the AdminServer and mn1 instances up and running): http://img710.imageshack.us/img710/8477/wliprocessinstancemonit.jpg

    According to this screen, the process seems to be running. However, I don't see any output at either the AdminServer console or the mn1 console. In the single-server domain, output from the JPD process's "timeout" callback method was visible; its implementation is shown below:

    @com.bea.wli.control.broker.MessageBroker.StaticSubscription(xquery = "", filterValueMatch = "", channelName = "/SamplePrefix/Samples/SampleStringChannel", messageBody = "{x0}")
    public void subscription(java.lang.String x0) {
        String toReturn = "";
        try {
            Context myCtx = new InitialContext();
            MBeanHome mbeanHome = (MBeanHome) myCtx.lookup("weblogic.management.home.localhome");
            toReturn = mbeanHome.getMBeanServer().getServerName();
            System.out.println("**** executed at **** " + System.currentTimeMillis() + " by: " + toReturn);
        } catch (Exception e) {
            System.out.println("Exception!");
            e.printStackTrace();
        }
    }

    (...)

    @org.apache.beehive.controls.api.events.EventHandler(field = "myT", eventSet = com.bea.control.WliTimerControl.Callback.class, eventName = "onTimeout")
    public void myT_onTimeout(long time, java.io.Serializable data) {
        // #START: CODE GENERATED - PROTECTED SECTION - you can safely add code above this comment in this method. #//
        // input transform
        System.out.println("**** published at **** " + System.currentTimeMillis());
        publishControl.publish("aaaa");
        // parameter assignment
        // #END : CODE GENERATED - PROTECTED SECTION - you can safely add code below this comment in this method. #//
    }

    And here is the output visible at the "AdminServer" console during single-server domain testing:

    **** published at **** 1273238090713
    **** executed at **** 1273238132123 by: AdminServer
    **** published at **** 1273238152462
    **** executed at **** 1273238152562 by: AdminServer
    (...)

    What may be wrong with my clustered configuration? Am I missing something to accomplish clustered deployment? Thanks in advance for your help.
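
    (Not part of the original question: when checking which cluster member actually executes the subscription callback, a lighter-weight alternative to the MBeanHome JNDI lookup is reading the server name from the weblogic.Name system property that WebLogic/WLI server JVMs are started with. A minimal sketch, assuming the code runs inside such a server JVM:)

    // Sketch only: relies on the -Dweblogic.Name=<serverName> startup property that
    // WebLogic admin and managed servers set; returns null outside a server JVM.
    public final class ServerNameUtil {
        private ServerNameUtil() {
        }

        public static String currentServerName() {
            return System.getProperty("weblogic.Name");
        }
    }

    Logging ServerNameUtil.currentServerName() from the subscription callback on both managed servers would show whether the Message Broker subscription is actually being dispatched to mn1, mn2, or neither.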

    Read the article
