Search Results

Search found 4946 results on 198 pages for 'proper benchmarking'.

Page 94 of 198

  • SQL SERVER – ERROR – FIX – Msg 3702, Level 16, State 3, Line 1 Cannot drop database “MyDBName” because it is currently in use

    - by pinaldave
    I often deliver seminars and presentations at various organizations, and during those presentations I frequently create and drop databases for demonstration purposes. Recently, in one of these presentations, I tried to remove a database I had just created and got the following error: Msg 3702, Level 16, State 3, Line 1 Cannot drop database “MyDBName” because it is currently in use. The reason was very simple: the database was in use by another session or window. One option was to find the open session, close it, and then drop the database. As I was in a rush, I quickly wrote the following code and was able to drop the database successfully:
        USE MASTER
        GO
        ALTER DATABASE MyDBName SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE MyDBName
        GO
    Please note that I do all of this only for demonstrations; do not run the above code on production without proper approvals and supervision. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How should I share variables between instances/classes?

    - by tesselode
    I'm making a game using LOVE, so everything is programmed in Lua. I've been experimenting with classes and object orientation recently. I've found that a nice system is to keep most of the game's code in different classes and to have a table of instances holding every instance of any class. This way, I can go through every instance of every class and update and draw it by calling the same function. There is a problem, though. Let's say I have an instance of a player with variables for health and the recharge time of a weapon. I also have a master instance which is responsible for drawing the HUD. How can I tell the master instance what the player's health is? Bad solutions:
    - Assuming that the player instance will always have the same position in the table: that can easily change.
    - Using global variables. Global variables are evil.
    - Having the master instance outside of the instances table, and having the player set variables inside the master instance, which it then uses for HUD drawing. This is really bad because now I have to duplicate every variable the master instance needs.
    What is the proper, standard way of sharing variables between instances? Do I need to change the way I keep track of instances?

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing yet, but I've had an idea to make it more appropriate for my field of use. It's not clear whether something like this already exists and, if so, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions and also spells out the expected result values, like so: unit::assert( test_me() == 17 ) What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example: unit::probe( test_me() ) Here the probe actually doubles as a collector on the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD; it's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be accomplished manually by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm asking about use with scripting languages, not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. Btw, I've discovered one test execution framework which allows for shortening simple test notations to: test_me(); // 17 While the result data is thus no longer coded in (it's a comment), that's still not a complete separation, and of course it would only work for scalar results.
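    Purely as an illustration of the probe idea described above, here is a minimal C# sketch. The Probe class, its Check method and the expected_results.json file are invented for this example and are not part of any existing framework: on the first run the probe records the observed value as the baseline, and on later runs it compares against the stored value, so the expected result lives outside the test code.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Text.Json;

        static class Probe
        {
            // Expected values live in a side file, not in the test code itself.
            private static readonly string Store = "expected_results.json";

            public static void Check(string testName, object actual)
            {
                var expected = File.Exists(Store)
                    ? JsonSerializer.Deserialize<Dictionary<string, string>>(File.ReadAllText(Store))
                      ?? new Dictionary<string, string>()
                    : new Dictionary<string, string>();

                string actualJson = JsonSerializer.Serialize(actual);

                if (!expected.TryGetValue(testName, out var stored))
                {
                    // First run: record the observed value as the baseline.
                    expected[testName] = actualJson;
                    File.WriteAllText(Store, JsonSerializer.Serialize(expected));
                    Console.WriteLine($"{testName}: baseline recorded");
                }
                else if (stored == actualJson)
                {
                    Console.WriteLine($"{testName}: OK");
                }
                else
                {
                    Console.WriteLine($"{testName}: FAILED (expected {stored}, got {actualJson})");
                }
            }
        }

        // Usage: the test only contains the tested logic, e.g.
        // Probe.Check("test_me", TestMe());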

    Read the article

  • I have 5 days of vouchers for MS training… help me choose? [closed]

    - by Shyatic
    I'm a Microsoft-centric guy (systems engineering side) and I already know the syntax of VB, have done VBScript pretty extensively, and have worked with Excel VBA as well. I want to make the leap into proper programming, probably with C#, because it teaches me syntax I can also use for Java if I want to go that route at some point. Since I have vouchers for 5 days of training, and I understand logic and how the .NET framework works, I would love to hear ideas on which MS courses I should take. My primary focus is to work on web applications with web services that interact and do neat stuff, like for example a 'chat' room or something interactive on the web. Or should I do something with HTML5/JS? I am really not sure... like I said, I want to work towards making web services/sites. Not the next Facebook, mind you, but something in that spectrum on a much smaller scale. Please give me any advice; I'd like to book these classes ASAP. Obviously getting involved with SQL and other things I will need would be important here... you guys know better than me! Thanks!

    Read the article

  • Link to article on website libraries

    - by acidzombie24
    I just started another website and it has taken me 30 minutes to copy/paste my other website and delete stuff, because I don't have a template. There are lots of features I copied over that I haven't seen in libraries/templates, but I don't really know any libraries/templates. This site is ASP.NET. Some things I have are a string.Format variant that escapes strings for HTML (so <hi> is text instead of a tag), helpers for adding or removing items in the URL query string, a class that takes an ASP.NET error and logs it or converts it into a row in a db (I know about ELMAH, but during development on my last site it wasn't Mono compatible), a mini AJAX library for success/fail/redirect/etc, and anything else I would use on every site. I don't like my (library) design because I wasn't expecting to do more than 2-3 websites and I am on my 5th. I don't know proper ASP.NET either, so what is an article that explains how to make a great library/template for websites?
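    As a rough sketch of the first feature mentioned (a string.Format variant that escapes its arguments for HTML), something like the following works in plain C#. The HtmlFormat name is invented for this example; only the arguments are encoded, and the format string itself is assumed to be trusted markup.
        using System;
        using System.Linq;
        using System.Net;

        static class HtmlHelpers
        {
            // Like string.Format, but HTML-encodes every argument so that
            // "<hi>" renders as text instead of being parsed as a tag.
            public static string HtmlFormat(string format, params object[] args)
            {
                var encoded = args
                    .Select(a => (object)WebUtility.HtmlEncode(a?.ToString() ?? string.Empty))
                    .ToArray();
                return string.Format(format, encoded);
            }
        }

        // Example:
        // HtmlHelpers.HtmlFormat("Hello, {0}!", "<hi>")  ->  "Hello, &lt;hi&gt;!"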

    Read the article

  • How can I run this script on startup, restart, and shutdown?

    - by Exeleration-G
    I'm using Ubuntu 11.10. I've written a script that synchronises a directory in ~ with a directory on /dev/sda4, using Unison. Before, I had this script running every five minutes with no problems, using crontab. Right now, I want to execute this script at startup, restart and shutdown only. This is what the script looks like:
        #!/bin/bash
        unison -perms 0 -batch "/mnt/Data/Syncfolder/" "/home/myname/Syncfolder/"
    My crontab configuration was as follows:
        m h dom mon dow command
        0,5,10,15,20,25,30,35,40,45,50,55 * * * * sh /usr/local/bin/s4lj.bash
    Note that I copied the script from ~ to /usr/local/bin/ first, to avoid root problems. I've read "How to execute script on shutdown?" and "How to write an init script that will execute an existing start script?". After doing that, I've done this: I've made s4lj.bash executable, and then copied it to /etc/init.d/. For startup, I've made a symlink in /etc/rc2.d/ to /etc/init.d/s4lj.bash, and renamed it to S70s4lj.bash. For restart, I've made a symlink in /etc/rc6.d/ to /etc/init.d/s4lj.bash, and renamed it to K70s4lj.bash. For shutdown, I've made a symlink in /etc/rc0.d/ to /etc/init.d/s4lj.bash, and renamed it to K70s4lj.bash. Still, the script won't be run in any of these situations. How can I make the script get executed? I'd be happiest with a proper *.conf file in /etc/init. Thanks in advance.

    Read the article

  • How to develop an Online Shopping Portal Application using PHP?

    - by Sarang
    I do not know PHP and I have to develop a Shopping Portal with the following definition: Scenario: Online Shopping Portal. XYZ.com wants to create an online shopping portal for managing its registered customers and their shopping. The customers need to register themselves before they can shop using the portal. However, everyone, whether registered or not, can view the various products along with the prices listed in the portal. The registered customers, after logging in, are allowed to place orders for one or more of the products listed in the portal. Once the order is placed, the customer gets a reference order number and the order status should be "order in process". The customers can track their order using the given reference number. The management of XYZ.com should be able to modify the order status of a particular reference order number to "shipped" once the products are shipped to the shipping address entered by the customer at the time of placing the order. The functionalities required are: Create the interface for the XYZ.com shopping portal using HTML/XHTML and CSS. Implement the client-side validations using JavaScript. Create the tables using MySQL. Implement the functionality using the server-side scripting language, PHP. Integrate all the above tasks and make the XYZ.com shopping portal functional. What are the proper steps of development for building this application?

    Read the article

  • Unity calendar lens not showing events

    - by David_G
    I'm trying to get proper/useful calendar integration into Ubuntu 12.04. I have a Google Calendar (& account) and I want to be able to use this without opening the browser. I want to get the Unity Calendar lens working, so that it shows upcoming events and gives me a quick way to add new events. However, after installing it, it does not find any events, nor does it allow me to add a new event. Note that I've installed Lightning 1.4, Evolution mirror 0.2.3, Evolution, and the unity-calendar lens. I've also installed Calendar-indicator. I suspect that somehow the lens is not getting the calendar information from Thunderbird via Evolution. A bit of searching around led me to try this command: /usr/lib/calendar-lens/calendar-lens-daemon.py. With this result:
        /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
        import gobject._gobject
        Traceback (most recent call last):
        File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 324, in
        daemon = Daemon()
        File "/usr/lib/calendar-lens/calendar-lens-daemon.py", line 80, in init
        for calendar in evolution.ecal.list_calendars():
        AttributeError: 'NoneType' object has no attribute 'list_calendars'
    Any ideas?

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the Web server:
    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the value of every field and posts it to the server.
    7. The server reads the posted data from the query string and then validates the data again, to ensure data consistency and to prevent any non-validated data (for example, because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.
    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a failover process. Client-side validation allows users to correct any errors before they are sent to the web server for processing, and this allows for an immediate response back to the user regarding data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server and will free up the server over time compared to doing only server-side validation. Server validation is the last line of defense because you can check that the user's data is correct before it is used in a business process or stored in a database. Honestly, I cannot foresee a scenario where I would want to use only one form of validation over the other, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
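    To make the double-validation point concrete, here is a small C# sketch of the server-side half; the field names and rules are invented for the example. The idea is simply that the server re-checks everything even though the client-side JavaScript should already have done so.
        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        static class ServerValidation
        {
            // Re-validate posted form values on the server, even if the client
            // already ran JavaScript validation (it may have been disabled).
            public static List<string> Validate(IDictionary<string, string> form)
            {
                var errors = new List<string>();

                if (!form.TryGetValue("email", out var email) ||
                    !Regex.IsMatch(email ?? "", @"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
                {
                    errors.Add("A valid email address is required.");
                }

                if (!form.TryGetValue("age", out var ageText) ||
                    !int.TryParse(ageText, out var age) || age < 0 || age > 150)
                {
                    errors.Add("Age must be a number between 0 and 150.");
                }

                return errors; // an empty list means the data may proceed to business logic
            }
        }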

    Read the article

  • What are the best practices for phasing out obsolete code?

    - by P.Brian.Mackey
    I need to phase out an obsolete method. I am aware of the [Obsolete] attribute. Does Microsoft have a recommended best-practice guide for doing this? Here's my current plan: A. I do not want to create a new assembly, because developers would have to add a new reference to their projects and I expect to get a lot of grief from my boss and co-workers if they must do this. We also do not maintain multiple assembly versions; we only use the latest version. Changing this practice would require changing our deployment process, which is a big issue (we would have to teach people how to do things with TFS instead of FinalBuilder and get them to give up FinalBuilder). B. Mark the old method obsolete. C. Because the implementation is changing (not the method signature), I need to rename the method rather than create an overload. So, to make users aware of the proper method, I plan to add a message to the [Obsolete] attribute. This part bothers me, because the only change I'm making is decoupling the method from the connection string. But, because I'm not adding a new assembly, I see no way around this. Result:
        [Obsolete("Please don't use this anymore because it does not implement IMyDbProvider. Use XXX instead.")]
        /// <summary>
        ///
        /// </summary>
        /// <param name="settingName"></param>
        /// <returns></returns>
        public static Dictionary<string, Setting> ReadSettings(string settingName)
        {
            return ReadSettings(settingName, SomeGeneralClass.ConnectionString);
        }

        public Dictionary<string, Setting> ReadSettings2(string settingName)
        {
            return ReadSettings(settingName); // IMyDbProvider.ConnectionString private member added to class. Probably have to make this an instance method.
        }
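    A small side note that may help with staging the phase-out: the ObsoleteAttribute also has a two-argument form whose second argument turns the warning into a compile error in a later release. The sketch below is only illustrative; IMyDbProvider is borrowed from the message above, while SettingsReader and DefaultDbProvider are invented names, not the poster's code.
        using System;
        using System.Collections.Generic;

        public interface IMyDbProvider { string ConnectionString { get; } }
        public class DefaultDbProvider : IMyDbProvider { public string ConnectionString => "..."; }

        public class SettingsReader
        {
            // Stage 1: callers get a compiler warning but still build.
            [Obsolete("Use ReadSettings(IMyDbProvider, string) instead.")]
            public static Dictionary<string, string> ReadSettings(string settingName)
            {
                return ReadSettings(new DefaultDbProvider(), settingName);
            }

            // Stage 2 (a later release): flip the second argument to true and
            // any caller that was not migrated now fails to compile:
            // [Obsolete("Use ReadSettings(IMyDbProvider, string) instead.", true)]

            public static Dictionary<string, string> ReadSettings(IMyDbProvider provider, string settingName)
            {
                // The connection string now comes from the provider rather than a static class.
                return new Dictionary<string, string>();
            }
        }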

    Read the article

  • Fixing SharePoint 2010 Permission Problems on Windows 7

    - by Ricardo Peres
    I had a tough time getting SharePoint to work properly on a Windows 7 development machine that was occasionally disconnected from the Active Directory (when I am home I must connect through a VPN). I mostly had problems with service applications such as User Profile, Managed Metadata, Business Connectivity Services and the like, and all I got were cryptic messages such as “access denied” or “the service or application pool is not started”. I was sure that both the services and the application pools were running under a domain account that had proper permissions on the SQL Server instance, and it was basically a fresh installation. Lots of people are having the same problem, apparently. After banging my head against the wall for several days, I remembered about farm (what I had) versus stand-alone (which I had never tried) installations. Bingo! Here’s what I did: I dropped all SharePoint databases and logins and reinstalled SP from scratch, only this time not in farm mode, but as stand-alone. After the SharePoint Configuration Wizard started, I cancelled it and started the Management Shell. I created the configuration database manually by using the New-SPConfigurationDatabase cmdlet, where I specified a local account – something that the Configuration Wizard wouldn’t allow me to do. Then I restarted the Configuration Wizard and everything began working perfectly! Yes, I got some pre-configured service applications and also some content which I didn’t need, but I realized it was possible to drop and recreate everything the way I wanted to. All services and application pools are now running under local accounts, which is fine for my development needs. Really, Microsoft… I hope this will bring light to someone facing the same problems!

    Read the article

  • Workaround: building FBX in XNA raises OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:
        Error 1 Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
        at System.Collections.Generic.List`1.set_Capacity(Int32 value)
        at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
        at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)
        at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)
        at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)
        at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)
        //additional calls here …
    My desktop PC has 8 GB of RAM, and Visual Studio’s process devenv.exe uses under 2 GB of it during the build (about 3.5-4 GB of RAM is always free). It’s obvious that VS can’t address more than 2 GB of RAM, and when that limit is exceeded, the build fails. The OS on my PC is Win x64, so I “charged” devenv.exe using the editbin.exe utility – in the VS Command Prompt I ran the following:
        editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE
    This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file built successfully! Of course, you must use the proper path to devenv.exe, depending on your installation path. If you are on Win x86, you need to take an additional step – more info here.   P.S.: although you can now build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). And if your model’s vertex buffer size is more than 64 MB (with the Reach profile), the model can’t be built and raises an error.

    Read the article

  • Does DirectX implement Triple Buffering?

    - by Asik
    As AnandTech put it best in this 2009 article: In render ahead, frames cannot be dropped. This means that when the queue is full, what is displayed can have a lot more lag. Microsoft doesn't implement triple buffering in DirectX, they implement render ahead (from 0 to 8 frames with 3 being the default). The major difference in the technique we've described here is the ability to drop frames when they are outdated. Render ahead forces older frames to be displayed. Queues can help smoothness and stuttering as a few really quick frames followed by a slow frame end up being evened out and spread over more frames. But the price you pay is in lag (the more frames in the queue, the longer it takes to empty the queue and the older the frames are that are displayed). As I understand it, DirectX "Swap Chain" is merely a render ahead queue, i.e. buffers cannot be dropped; the longer the chain, the greater the input latency. At the same time, I find it hard to believe that the most widely used graphics API would not implement such fundamental functionality correctly. Is there a way to get proper triple buffered vertical synchronisation in DirectX?
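    The behavioural difference described in the quoted article can be sketched in plain C#, independent of any graphics API (the class below is a conceptual illustration only, not DirectX code): a render-ahead queue stalls the producer when it is full, while a triple-buffering style queue drops the oldest pending frame so only the freshest one waits to be displayed.
        using System.Collections.Generic;

        class FrameQueue
        {
            private readonly Queue<int> _pending = new Queue<int>(); // frame ids
            private readonly int _capacity;
            private readonly bool _dropWhenFull; // true ~ triple buffering, false ~ render ahead

            public FrameQueue(int capacity, bool dropWhenFull)
            {
                _capacity = capacity;
                _dropWhenFull = dropWhenFull;
            }

            // Returns false if the renderer has to stall (render-ahead behaviour).
            public bool Submit(int frameId)
            {
                if (_pending.Count == _capacity)
                {
                    if (!_dropWhenFull)
                        return false;      // queue full: renderer waits, latency grows
                    _pending.Dequeue();    // discard the outdated frame instead
                }
                _pending.Enqueue(frameId); // the newest frame is always kept
                return true;
            }

            public bool TryPresent(out int frameId)
            {
                if (_pending.Count == 0) { frameId = 0; return false; }
                frameId = _pending.Dequeue();
                return true;
            }
        }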

    Read the article

  • Example of DOD design (on a generic Zombie game)

    - by Jeffrey
    I can't seem to find a nice explanation of Data Oriented Design for a generic zombie game (it's just an example, a pretty common one). Could you give an example of Data Oriented Design applied to a generic zombie class? Is the following good?
    Zombie list class:
        class ZombieList {
            GLuint vbo; // generic zombie vertex model
            std::vector<color>; // object default color
            std::vector<texture>; // objects textures
            std::vector<vector3D>; // objects positions
        public:
            unsigned int create(); // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, color c);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*); // move towards player, attack if near
        }
    Example:
        Player p;
        Zombielist zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }
    Or would creating a generic World container that contains every action from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector), and contains:
        std::vector<zombie>
        std::vector<player>
        ...
        std::vector<mapchunk>
        ...
        std::vector<vbobufferid> player_run_animation;
        ...
    be a good example? What's the proper way to organize a game with DOD?
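    For comparison, the structure-of-arrays idea behind the question (one container owning parallel arrays of plain data, updated in a single tight loop) can be sketched in C# as well; the fields and the "move towards player" rule below are simplified placeholders, not a recommendation for an exact API.
        using System.Collections.Generic;
        using System.Numerics;

        // Data-oriented layout: one class owns parallel lists of plain data
        // for all zombies, instead of one heavyweight object per zombie.
        class ZombieList
        {
            private readonly List<Vector3> _positions = new List<Vector3>();
            private readonly List<float> _healths = new List<float>();

            public int Create(Vector3 position, float health)
            {
                _positions.Add(position);
                _healths.Add(health);
                return _positions.Count - 1; // the id is simply the index
            }

            public void Update(Vector3 playerPosition, float dt)
            {
                // Single tight loop over contiguous data: move each zombie toward the player.
                for (int i = 0; i < _positions.Count; i++)
                {
                    var delta = playerPosition - _positions[i];
                    if (delta == Vector3.Zero) continue;
                    _positions[i] += Vector3.Normalize(delta) * dt;
                }
            }
        }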

    Read the article

  • (Quaternion based) Trouble moving forward based on model rotation

    - by ChocoMan
    Using quaternions, I'm having trouble moving my model in its facing direction. Currently the model can move in all cardinal directions with no problems. The problem comes when I rotate the model while it is moving, as it keeps travelling in the same world-space direction. Meaning, if I'm moving forward, backward or in any other direction while rotating the model, the model acts like a figure skater spinning while traveling in the same direction. How do I properly update the direction of travel to match the facing direction of the model?
    Rotates the model on the Y-axis:
        Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX);
        AddRotation = Quaternion.CreateFromYawPitchRoll(yaw, 0, 0);
        ModelLoad.MRotation *= AddRotation;
        MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
    Moves the model forward:
        // Move Forward
        if (pController.IsButtonDown(Buttons.LeftThumbstickUp))
        {
            SpeedX = (float)(Math.Sin(ModelLoad.ModelRotation)) * FWDSpeedMax * pController.ThumbSticks.Left.Y * (float)gameTime.ElapsedGameTime.TotalSeconds;
            SpeedZ = (float)(Math.Cos(ModelLoad.ModelRotation)) * FWDSpeedMax * pController.ThumbSticks.Left.Y * (float)gameTime.ElapsedGameTime.TotalSeconds;
            // Update model position
            ModelLoad._modelPos += Vector3.Forward * SpeedZ;
            ModelLoad._modelPos += Vector3.Left * SpeedX;
        }
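    One common way to approach this kind of problem, offered here only as a hedged sketch built on the identifiers in the snippet above rather than as the poster's actual solution, is to take the travel direction from the orientation matrix (already computed with Matrix.CreateFromQuaternion) instead of the fixed world axes, e.g. via XNA's Matrix.Forward property:
        // Sketch: derive the travel direction from the model's orientation
        // instead of the fixed Vector3.Forward / Vector3.Left world axes.
        float speed = FWDSpeedMax * pController.ThumbSticks.Left.Y
                      * (float)gameTime.ElapsedGameTime.TotalSeconds;

        // MOrientation.Forward rotates with the model, so the movement
        // follows the facing direction rather than world space.
        ModelLoad._modelPos += MOrientation.Forward * speed;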

    Read the article

  • Forwarding a subdomain to main domain using Godaddy

    - by Ryan Hayes
    I have a blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net, and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager? Bonus: what is the background on why I am getting a 403 error the current way?
        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
    UPDATE 12/7/2010: The error on the site has been fixed; you can no longer view it from my site.

    Read the article

  • Has anybody earned $0.25+ for each captcha passed on your website?

    - by vgv8
    I am a real dummy when it comes to web monetization schemes. [1] informs: "Solve [Media] charges a fee of about 25 cents to 50 cents for each form that is filled out using a Type-In ad [captcha]... the company splits its fees 50-50 with the websites where the ads are placed". Honestly, I cannot imagine that anyone in their right senses pays that much money for just one captcha passed. And how should these claims be understood? http://www.solvemedia.com/images/ie9_aboutcalcount.png shows: why would Microsoft pay 0.25-0.5 USD for each entered string "Be part of the Beta"? Have any webmasters (sysadmins) actually received such payments from SolveMedia captchas deployed on their websites? Is it a scam? Because if you check the sites mentioned in http://www.solvemedia.com/gallery.html, for example http://www.toyotanation.com/forum/register.php?do=register, they do not have such captchas. What am I missing? Cited: [1] Jennifer Valentino-DeVries, "An Online Ad That's Tough to Ignore", Wall Street Journal Blog, September 20, 2010, http://blogs.wsj.com/digits/2010/09/20/an-online-ad-thats-tough-to-ignore/

    Read the article

  • Little mysterious RowMatch

    - by kishore.kondepudi(at)oracle.com
    Incidentally, this was the first piece of code I ever wrote in ADF. The requirement: we have tax rates which are read from a table, and there can be different types of tax rates, called certificates or exceptions, based on the rate_type column in the tax rates table. The simplest design I chose was to create an EO on the tax rates table and create two VOs, called CertificateVO and ExceptionVO, based on the same EO. So far so good. I wrote all the business logic in the EO and completed the model project. The CertificateVO has the query select * from tax_rates TaxRateEO where rate_type='CERTIFICATE', and the ExceptionVO is built similarly. The UI is pretty simple: it has two tabs, called Certificates and Exceptions, and each table has a button to create a tax rate. The Certificates tab is driven by CertificateVO and the Exceptions tab is driven by ExceptionVO. The CertificateVO has the default value of rate_type set to 'CERTIFICATE' and the ExceptionVO has it set to 'EXCEPTION', to provide default values for new records. So far so good. But on running the UI I noticed a strange thing: when I create a new row in Certificates, I see the same row in Exceptions too, and vice versa. That is, whatever row I create in one VO also appears in the other, although it shouldn't. I couldn't understand the reason for this behavior even though an explicit where clause is present. Digging through the documentation, I found that ADF doesn't apply the where clause to new rows; instead it applies something called a RowMatch to them. A RowMatch, in simple terms, is a where condition applied to the VO rows at runtime. Since both VOs are based on the same EO, they share the same entity cache. The filter that decides which new rows are shown in a VO at runtime is actually the RowMatch, not the where clause defined in the VO. The default RowMatch is empty; as a result, any new row appears in both VOs, since it comes from the same entity cache. The solution is to use polymorphic view objects, which can do the row filtering based on configuration, or to override the getRowMatch() method in the VOImpl and return a custom where filter instead of the default RowMatch. E.g.:
        @Override
        public RowMatch getRowMatch()
        {
            return new RowMatch("rate_type='CERTIFICATE'");
        }
    and similarly for ExceptionVO. With a proper RowMatch in place, new rows will route themselves to the appropriate VO. PS: This behavior (the same row being pushed to both VOs from the entity cache) is also called ViewLink Consistency. Try it out!

    Read the article

  • A Cautionary Tale About Multi-Source JNDI Configuration

    - by scott.s.nelson(at)oracle.com
    Here's a bit of fun with WebLogic JDBC configurations. I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string, and when rolling back to the bad configuration the server went "Boom". Since one purpose behind this blog is to share lessons learned, I just had to post this. If you write your descriptors manually (as opposed to generating them using the WLS console) and put a comma-separated list of JNDI addresses like this:
        <jdbc-data-source-params>
            <jndi-name>weblogic.jdbc.jts.commercePool,contentDataSource,
                contentVersioningDataSource,portalFrameworkPool</jndi-name>
            <algorithm-type>Load-Balancing</algorithm-type>
            <data-source-list>portalDataSource-rac0,portalDataSource-rac1</data-source-list>
            <failover-request-if-busy>false</failover-request-if-busy>
        </jdbc-data-source-params>
    then, so long as the first address resolves, it will still work. Sort of. If you call this connection to do an update, only one node of the RAC instance is updated. Other wonderful side effects include the server refusing to start sometimes. The proper way to list the JNDI sources is one per node, like this:
        <jdbc-data-source-params>
            <jndi-name>weblogic.jdbc.jts.commercePool</jndi-name>
            <jndi-name>contentDataSource</jndi-name>
            <jndi-name>contentVersioningDataSource</jndi-name>
            <jndi-name>portalFrameworkPool</jndi-name>
            <algorithm-type>Load-Balancing</algorithm-type>
            <data-source-list>portalDataSource-rac0, portalDataSource-rac1, portalDataSource-rac2</data-source-list>
            <failover-request-if-busy>false</failover-request-if-busy>
        </jdbc-data-source-params>
    (Props to Sandeep Seshan for locating the root cause)

    Read the article

  • Design of input file reading when it comes to defaults/transformations

    - by Stefano Borini
    Suppose you have an application that reads an input file, in a language that does not support the concept of None. The input is read, parsed, and the contents are stored in a structure for later use. Now, in general you want to take into account transformations of the data from the input, such as adding default values when something is not specified, or adding full path information to relative paths specified in the input. There are two different strategies to achieve this. The first strategy is to perform these transformations at input file reading time. In practice, you put all the intelligence into the input parser, and your application has no logic to deal with unexpected circumstances, such as an unspecified value. You lose the information of what was specified and what wasn't, but you gain in black-boxing the details. Your "running code" needs that information in any case and in a proper form, and is not concerned whether it's the default or user-specified information. The second strategy is to have the file reader be a real one-to-one mapper from the file to a memory-stored object, with no intelligent behavior. Unspecified values are not filled in (which may, however, be a problem in languages not supporting None) and data is stored verbatim from the file. The intelligence for recovery must now go into the "running code", which must check what was specified in the file, eventually fall back to a default, or modify the input properly before using it. I would like to know your opinion on these two approaches, and in particular which one you have found most frequently implemented.
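    A tiny C# sketch of the two strategies described above (the Config fields, key names and defaults are all invented for illustration): in the first, the reader applies defaults and path resolution up front; in the second, the reader stores exactly what the file said and leaves the fallback logic to the running code.
        using System.IO;

        class Config
        {
            public string OutputDir;   // null means "not specified in the file"
            public int? Threads;
        }

        static class ConfigReader
        {
            // Strategy 1: the parser applies defaults and transformations up front.
            public static Config ReadResolved(string path)
            {
                var raw = ReadVerbatim(path);
                return new Config
                {
                    OutputDir = Path.GetFullPath(raw.OutputDir ?? "./out"), // default + full path
                    Threads = raw.Threads ?? 1
                };
            }

            // Strategy 2: a one-to-one mapping; nothing is filled in here,
            // so the running code must check for nulls and fall back itself.
            public static Config ReadVerbatim(string path)
            {
                var cfg = new Config();
                foreach (var line in File.ReadAllLines(path))
                {
                    var parts = line.Split(new[] { '=' }, 2);
                    if (parts.Length != 2) continue;
                    if (parts[0].Trim() == "output_dir") cfg.OutputDir = parts[1].Trim();
                    if (parts[0].Trim() == "threads") cfg.Threads = int.Parse(parts[1].Trim());
                }
                return cfg;
            }
        }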

    Read the article

  • Best way to let users choose country/language when submitting a URL to a directory

    - by Claudiu
    Hi all, I want to offer users the possibility to specify the country/language for websites they submit to a fairly simple website directory. I have a folder with flags from http://www.famfamfam.com/lab/icons/flags/ . The flag images are named according to the ISO 3166-1 alpha-2 country codes, meaning that I could make a PHP script that retrieves the images and derives the country code from the image name (not the full country name, but that wouldn't be necessary). Just to make things clearer, I couldn't find a proper combo-box jQuery plugin for my needs (one that would act exactly like the native control but with an icon before the text) and don't really have the time to develop one on my own. Considering the number of images, I also wouldn't just display them all with a radio box near each one. Also, having a classic drop-down list would be a nightmare for me, as I would have to assign the short country name manually to each entry, or do it once for every country. Offering the user a drop-down list with the short country names but no flag near them would also be unfriendly and confusing. The idea is that every website featured in the directory would have the country flag icon near it. I have the images named properly, but I don't know how to let the user choose the right image for their website. Any ideas? Thank you all in advance! EDIT: A temporary solution is this file: http://www.andrewpatton.com/countrylist.csv It contains a list of countries, including various other info such as the short country name, which is the same name that's used for the flag images. I can take that information and build a classic drop-down like this:
        <select name="countries">
            <option value="ro">Romania</option>
            <option value="ie">Ireland</option>
            <!-- and so on -->
        </select>
    Still, if anybody has a better idea...

    Read the article

  • How do I use depth testing and texture transparency together in my 2.5D world?

    - by nbolton
    Note: I've already found an answer (which I will post after this question) - I was just wondering if I was doing it right, or if there is a better way. I'm making a "2.5D" isometric game using OpenGL ES (JOGL). By "2.5D", I mean that the world is 3D, but it is rendered using 2D isometric tiles. The original problem I had to solve was that my textures had to be rendered in order (from back to front), so that the tiles overlapped properly to create the proper effect. After some reading, I quickly realised that this is the "old hat" 2D approach. This became difficult to do efficiently, since the 3D world can be modified by the player (so stuff can appear anywhere in 3D space) - so it seemed logical that I take advantage of the depth buffer. This meant that I didn't have to worry about rendering stuff in the correct order. However, I faced a problem. If you use GL_DEPTH_TEST and GL_BLEND together, it creates an effect where objects are blended with the background before they are "sorted" by z order (meaning that you get a weird kind of overlap where the transparency should be). Here's some pseudo code that should illustrate the problem (incidentally, I'm using libgdx for Android).
        create() {
            // ...
            // some other code here
            // ...
            Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
            Gdx.gl.glEnable(GL10.GL_BLEND);
        }
        render() {
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            Gdx.gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
            // ...
            // bind texture and create vertices
            // ...
        }
    So the question is: How do I solve the transparency overlap problem?

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?”
    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release.
    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:
    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes!
    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt at classifying our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting.
    There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are far less advanced down the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.
    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually
    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in place
    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases
    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers
    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Forum engine with full LDAP integration [closed]

    - by Andrian Nord
    We are looking for a forum engine that can actually maintain its user data in LDAP, maybe via mods. The core point is the ability to maintain the data, i.e. all user profile settings, like nickname, password, email, avatar, birthday and others (preferably configurable). One example of the level of LDAP integration I'm expecting is Drupal's LDAP integration, which allows mapping any user attribute into LDAP and keeps it in sync with the database. A year ago I did a small bit of research on existing free and FOSS engines and found a few forum engines with LDAP integration, namely SFM, phpBB and some others. The most maintained solution was provided by phpBB3, which supports LDAP integration out of the box, but it is unable to sync data with changes made on the LDAP server by other software. Actually, it wasn't even propagating changes back, let alone able to map additional attributes (other than name/password/email). Also, I haven't found any forum with an architecture that has a proper abstraction over user settings, so I doubt that these engines (including phpBB) can be modded with such functionality without dramatic changes to the core codebase. More recent research showed that even some commercial software, like IPB, is unable to keep its database synced with an LDAP directory and map additional attributes. In other words, all the support I've seen so far is simple user creation upon the user's first login, which is not good for us, as the forum is not the primary site and should not maintain its own user base (to reduce the risk of possible collisions). LDAP import is required because many other services (ftp, email, jabber, the Drupal site) use the same user base. Currently we have a forum embedded into the Drupal site, but we are unsatisfied with its features. So, my question is: are there any (preferably free and FOSS) forum engines that can import, export, keep in sync, or otherwise integrate with an LDAP user database (preferably with the ability to map additional fields to LDAP attributes)?

    Read the article

  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.
    Setup:
    - 512x448 window
    - 64x64 grid
    - gl_Position = projection * world * position;
    - projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f); this is a textbook orthogonal projection function.
    - world is defined by a fixed camera position at (0, 0)
    - position is defined by the sprite's position.
    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64); however, the unit draws roughly ~10px in the wrong position. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way of providing a 1:1 pixel-to-world-unit projection. Anyhow, here are some quick images to aid in illustrating the problem. I decided to super-impose a bunch of the sprites at what the engine believes are 64x offsets. When this seemed out of place, I went back and did the base case of 1 unit, which seemed to line up as expected. The yellow shows a 1px difference in the movement.
    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO:
                x      y           x      y
             -----------------------------------
        tl |   0.0   24.0        64.0   24.0
        bl |   0.0    0.0   ->   64.0    0.0
        tr |  16.0    0.0        80.0    0.0
        br |  16.0   24.0        80.0   24.0
    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining the 1:1 pixel-to-world-unit projection.

    Read the article
