Search Results

Search found 10815 results on 433 pages for 'stored procedure'.

Page 114/433 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Customizing / overriding ComboBox (2 replies)

    Hi, I would like to override a Windows.Forms.ComboBox to have a MyObjectCollection instead of an ObjectCollection as its Items property. I've tried writing this code, but it seems that items are stored somewhere else (not in the Items property). I can add items to the collection, but when I try to select an item from the combobox (at runtime) an exception tells me that there is no such element in Items (literally it tel...
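    A minimal sketch of the property-hiding approach being described (names follow the question; MyObjectCollection is stubbed so the snippet compiles), which also illustrates the failure mode: ComboBox.Items is not virtual, so a derived class can only shadow it, and the native control keeps using its own internal item list:

        using System.Collections.Generic;
        using System.Windows.Forms;

        // Hypothetical replacement collection. Items added here never reach
        // the native combo box, which matches the "no such element" exception.
        public class MyObjectCollection : List<object> { }

        public class MyComboBox : ComboBox
        {
            // "new" hides the base Items property instead of overriding it;
            // the base control still reads from its own ObjectCollection.
            public new MyObjectCollection Items { get; } = new MyObjectCollection();
        }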

    Read the article

  • What do you do when practical problems get in the way of practical goals?

    - by P.Brian.Mackey
    UPDATE: Source control is good to use. Sometimes, real-world issues make it impractical to use. For example:
    * If the team is not used to using source control, training problems can arise.
    * If a team member directly modifies code on the server, various issues can arise: merge problems, lack of history, etc.

    Let's say there's a project that is way out of sync. The physical files on the server differ in unknown ways across ~100 files. Merging would take not only a great knowledge of the project, but is also well beyond what could be completed in the given time. Other projects are falling out of sync. Developers continue to distrust source control and therefore compound the issue by not using it. Developers argue that using source control is wasteful because merging is error-prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is indeed error-prone. Therefore, the evidence "speaks for itself" in their view. Developers also argue that directly modifying code on the server saves time. This too is difficult to argue, because the merge required to synchronize the code in the first place is time-consuming across ~10 projects. Permanent files are often stored in the same directory as the web project, so publishing (a full publish) erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points. So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

    Read the article

  • General Policies and Procedures for Maintaining the Value of Data Assets

    Here is a general list of policies and procedures for maintaining the value of data assets.

    Data Backup Policies and Procedures
    Backups are very important when dealing with data because there is always the chance of losing data due to faulty hardware or user activity, so a strategic backup system should be mandatory for all companies. That being said, in the real world some companies that I have worked for do not really have a good data backup plan, and typically when companies take that kind of approach to backups the data is not really recoverable. Unfortunately, when companies do not regularly test their backup plans they get a false sense of security because they think that they are covered. However, I can tell you from personal and professional experience that a backup plan/system is never fully implemented until it is regularly tested prior to the time when it actually needs to be used.

    Disaster Recovery Plan
    Expanding on backup policies and procedures, a company also needs a disaster recovery plan in order to protect its data in case of a catastrophic disaster. Disaster recovery plans typically encompass how to restore all of a company's data and infrastructure back to an operational status, and most also include time estimates for how long each step of the plan should take to execute. It is important to note that disaster recovery plans, just like backup plans, are never fully implemented until they have been tested. They should be tested regularly so that the business can be confident of losing no data, or only minimal data, in a catastrophic disaster.

    Firewall Policies and Content Filters
    One way companies can protect their data is by using a firewall to separate their internal network from the outside. Firewalls enable or disable network access as data passes through them by applying various defined restrictions; they can also be used to prevent access from the internal network to the outside by the same factors. Common firewall restrictions:
    * Destination/sender IP address
    * Destination/sender host names
    * Domain names
    * Network ports
    Companies may also want to restrict what their network users view on the internet through things like content filters. Content filters allow a company to track which web pages a person has accessed, and can restrict users' access based on rules established in the content filter. This device and/or software can block access to domains or specific URLs based on a few factors. Common content filter criteria:
    * Known malicious sites
    * Specific page content
    * Page content theme

    Anti-Virus/Malware Policies
    Fortunately, most companies utilize antivirus programs on all computers and servers, for good reason: viruses have been known to corrupt or invalidate data, destroy data, and steal data. Anti-virus applications are a great way to prevent any malicious application from gaining access to a company's data. However, anti-virus programs must be constantly updated, because new viruses are always being created, and anti-virus vendors need to distribute updates to their applications so that they can catch and remove them.

    Data Validation Policies and Procedures
    Data validation is very important to ensure that only accurate information is stored. The existence of invalid data can cause major problems when businesses attempt to use data for knowledge-based decisions and for performance reporting.

    Data Scrubbing Policies and Procedures
    Data scrubbing is valuable to companies in one of two ways. The first is cleaning data prior to its being analyzed for report generation. The second is that it allows companies to remove things like personally identifiable information from data prior to transmitting it between multiple environments, or when the information is sent to an external location. An example of this can be seen with medical records, in regard to HIPAA rules restricting the storage of specific personal and medical information. Additionally, I have professionally run into a scenario where the Canadian government does not allow any Canadian's personal information to be stored on a server not located in Canada.

    Encryption Practices
    The use of encryption is very valuable when a company needs to store any personal information. It allows users with the appropriate access levels to view or confirm the existence or accuracy of data within a system, by either decrypting the information or encrypting a piece of data and comparing it to the stored version. Additionally, if for some unforeseen reason the data got into the wrong hands, they would have to first decrypt the data before they could even read it. Encryption just adds an additional layer of protection around the data itself.

    Standard Normalization Practices
    The use of standard data normalization practices is very important when dealing with data because it can prevent a lot of potential issues by eliminating unnecessary data duplication. Issues caused by data duplication include excess use of data storage, an increased chance of invalid data, and overuse of data processing.

    Network and Database Security/Access Policies
    Every company has some form of network/data access policy, even if that policy is to have none. These policies help secure data from being seen by inappropriate users, and prevent data from being updated or deleted by unauthorized users. In addition, without a good security policy there is a large potential for data to be corrupted by unassuming users, or even stolen.

    Data Storage Policies
    Data storage policies are very important, depending on how they are implemented, especially when a company is trying to utilize them in conjunction with other policies like data backups. I have worked at companies where all network user folders are constantly backed up, and if a user wanted to ensure the survival of a file then they had to store it in their network folder. Conversely, I have also worked in places where a user's entire profile is backed up whenever they log on or off of the network.

    Training Policies
    One of the biggest ways to prevent data loss, and to ensure that data will remain a company asset, is through training. Properly training employees on how to work with systems that access data is crucial to ensuring a company's data remains an asset. Users need to be trained in how to manipulate a company's data in order to perform their tasks, which reduces the chances of invalidating data.

    Read the article

  • Simple collision detection in Unity 2D

    - by N1ghtshade3
    I realise other posts exist with this topic yet none have gone into enough detail for me. I am attempting to create a 2D game in Unity using C# as my scripting language. Basically I have two objects, player and bomb. Both were created simply by dragging the respective PNG to the stage. I have set up touch controls to move player left and right; gravity of any kind is not needed as I only require it to move x units when I tap either the left or right side of the screen. This movement is stored in a script called playerController.cs and works just fine. I also have a variable health = 3 for player, which is stored in healthScript.cs. I am now at a point where I am stuck. I would like it so that when player collides with bomb, health decreases by one and the bomb object is destroyed. So what I tried doing is using a new script called playerPhysics.cs, I added the following:

        void OnCollisionEnter2D(Collision2D coll){
            if(coll.gameObject.name=="bomb")
                GameObject.Destroy("bomb");
            healthScript.health -= 1;
        }

    While I'm fairly sure I don't know the proper way to reference a variable in another script and that's why the health didn't decrease when I collided, bomb never disappeared from the stage so I'm thinking there's also a problem with my collision. Initially, I had simply attached playerPhysics.cs to player. After searching around though, it appeared as though player also needed a rigidBody attached to it, so I did that. Still no luck. I tried using a circleCollider (player is a circle), using a rigidBody2D, and using all manner of colliders on one and/or both of the objects. If you could please explain what colliders (if any) should be attached to which objects and whether I need to change my script(s), that would be much more helpful than pointing me to one of the generic documentation examples I've already read. Also, if it would be simple to fix the health thing not working that would be an added bonus but not exactly the focus of this question. Bear in mind that this game is 2D; I'm not sure if that changes anything. Thanks!
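    A minimal corrected sketch, under common assumptions (the scripts sit on the player, the bomb GameObject is named "bomb", the player has a Rigidbody2D, and both objects have a Collider2D; healthScript is the question's own class, stubbed here so the example is self-contained):

        using UnityEngine;

        // Stub of the question's health script (attach to the player).
        public class healthScript : MonoBehaviour
        {
            public int health = 3;
        }

        // Attach to the player alongside healthScript.
        public class playerPhysics : MonoBehaviour
        {
            void OnCollisionEnter2D(Collision2D coll)
            {
                if (coll.gameObject.name == "bomb")
                {
                    // Destroy takes an Object reference, not a string.
                    Destroy(coll.gameObject);

                    // Reach the variable in the other script via GetComponent.
                    GetComponent<healthScript>().health -= 1;
                }
            }
        }

    Note that 2D collision callbacks only fire when at least one of the two objects has a Rigidbody2D and neither collider is marked as a trigger; that alone explains many "nothing happens" cases.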

    Read the article

  • Using another browser than w3m for reading HTML helpfiles?

    - by NES
    I'm using fish as my shell. The internal help is based on HTML files, and when I type help, it opens w3m to view these help files. Since w3m is not my default browser, I wonder where the configuration that starts w3m for this is stored. I'd like to read the help files with another browser. How can I set up another one for this purpose, or alternatively, where are the help files located so I can open them manually in a browser?

    Read the article

  • WebFoundations

    - by csharp-source.net
    A simple, SEO-friendly C#, ASP.NET, XML content management system (CMS). These 'WebFoundations' are a great starting block when developing an ASP.NET CMS. Features:
    * A WYSIWYG editor (FCKeditor)
    * Content caching (no IO overhead)
    * Multi-language support (can be set on querystring or dropdown)
    * Search-engine-friendly URLs (URL rewriting)
    * Easily themeable (built on ASP.NET master pages)
    * An image gallery control (it consumes XML Picasa exports)
    Web Foundations sites can be hosted on inexpensive hosting as there is NO database requirement (all the data is stored in XML files).

    Read the article

  • Sql Injection Prevention

    To protect your application from SQL injection, perform the following steps:
    * Step 1: Constrain input.
    * Step 2: Use parameters with stored procedures.
    * Step 3: Use parameters with dynamic SQL.
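    As a minimal illustration of steps 2 and 3 (object and parameter names are hypothetical), the same parameterization applies whether the statement is a stored procedure or dynamic SQL run through sp_executesql:

        -- Step 2: parameters with a stored procedure (names hypothetical)
        CREATE PROCEDURE dbo.GetUserByName
            @Name nvarchar(50)
        AS
            SELECT UserId, Name FROM dbo.Users WHERE Name = @Name;
        GO

        -- Step 3: parameters with dynamic SQL via sp_executesql,
        -- instead of concatenating user input into the statement
        DECLARE @search nvarchar(50) = N'user input';
        EXEC sp_executesql
            N'SELECT UserId, Name FROM dbo.Users WHERE Name = @p',
            N'@p nvarchar(50)',
            @p = @search;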

    Read the article

  • How can I automatically mute the volume at every boot?

    - by ændrük
    Sometimes I forget to enable mute before shutting down my laptop. Can I set it up to be muted by default every time Ubuntu boots, before the login screen is displayed? When I try DoR's suggestion of sudo alsactl store, the settings stored in /var/lib/alsa/asound.state are lost on the next reboot. Something is using this file to automatically save the current volume settings every time I reboot.

    Read the article

  • Parsing T-SQL – The easy way

    - by Dave Ballantyne
    Every once in a while, I hit an issue that requires me to interrogate/parse some T-SQL code. Normally, I would shy away from this and attempt to solve the problem in some other way. I have written parsers in the past using LEX and YACC, and as much fun and awesomeness as that path is, I couldn't justify the time it would take. However, this week I was faced with just such an issue, and at the back of my mind I could remember reading through the SQL Server 2012 feature pack and seeing something called "Microsoft SQL Server 2012 Transact-SQL Language Service". This is described there as: "The SQL Server Transact-SQL Language Service is a component based on the .NET Framework which provides parsing validation and IntelliSense services for Transact-SQL for SQL Server 2012, SQL Server 2008 R2, and SQL Server 2008."

    Sounds like just what I was after. Documentation is very scant on this, so don't take what follows as best practice or best use, just a practice and a use. Knowing roughly what I was looking for, I found the relevant assembly in the GAC, which is simply named 'Microsoft.SqlServer.Management.SqlParser'. You won't find much in terms of documentation if you do a web search, but you will find the MSDN documentation that lists the members and methods etc. The "Scanner" class sounded the most appropriate for my needs, as it is described as "Scans Transact-SQL searching for individual units of code or tokens." After a bit of poking around, the code I ended up with was something like:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null
        $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions
        $ParseOptions.BatchSeparator = 'GO'
        $Parser = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions)
        $Sql = "Create Procedure MyProc as Select top(10) * from dbo.Table"
        $Parser.SetSource($Sql, 0)
        $Token = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET
        $Start = 0
        $End = 0
        $State = 0
        $IsEndOfBatch = $false
        $IsMatched = $false
        $IsExecAutoParamHelp = $false
        while(($Token = $Parser.GetNext([ref]$State, [ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp)) -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF)
        {
            try {
                ($TokenPrs = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null
                $TokenPrs
                $Sql.Substring($Start, ($End - $Start) + 1)
            } catch {
                $TokenPrs = $null
            }
        }

    As you can see, the $Sql variable holds the SQL to be parsed; that is pushed into the $Parser object using SetSource, and then we call GetNext until the EOF token is returned. GetNext will also return the start and end character positions within the source string of the parsed text. This script's output is:

        TOKEN_CREATE Create
        TOKEN_PROCEDURE Procedure
        TOKEN_ID MyProc
        TOKEN_AS as
        TOKEN_SELECT Select
        TOKEN_TOP top
        TOKEN_INTEGER 10
        TOKEN_FROM from
        TOKEN_ID dbo
        TOKEN_TABLE Table

    Note that the '(', ')' and '*' characters return a token type that is not present in the Microsoft.SqlServer.Management.SqlParser.Parser.Tokens enum, which causes an error that is caught in the catch block. Fun, fun, fun, simple T-SQL parsing. Hope this helps someone in the same position; let me know how you get on.

    Read the article

  • User password rejected on login screen but accepted on text console login

    - by MadsirR
    I had to force shutdown my Ubuntu 12.04 64-bit, after which I restarted and tried to log in as a normal user, which was rejected several times. I then logged in as guest and switched via tty to my regular account with my normal password, which succeeded. (So the password is still valid.) How can I regain access via the normal login procedure (welcome screen)?

    Update: When I tried to log on with my new password, it was again denied. When I deliberately tried to log on with a faulty password, an error message came back saying: Access denied - wrong password. I suppose the first time the password was not rejected, but the procedure was aborted for some reason.

    Some additional info after trying to find a solution: I am convinced it is a Compiz issue. Why? Before this happened, all sessions came to a grinding halt, regardless of being logged on in a 2D or 3D environment. I found a link saying that I should remove Compiz and proceed in a 2D environment, which initially worked without a glitch, until my system went into a state of total oblivion. Only after that did the above-mentioned troubles appear. In the meantime I happened to find a thread with reference 17381, describing exactly what I have experienced. For now, I will try to cure this situation (later this week) and revert with the results, hopefully to close this post. In the meantime I cordially thank you all; even if you didn't kill the problem, you gave me the inspiration to look further and find a possible cure.

    Update 2: After 15 hrs of trial and error I called it quits. (When I decided to tackle this problem, I'd given myself 12 hrs, to avoid massive loss of time.) I decided to re-install Precise, since the "point 1" version has become available. Login is back to normal, as is the graphic environment. Response to mouse input is still appalling, especially when I have a series of screens open as "children" of a "parent" screen. It still completely locks up. I have installed Enlightenment, Gnome Classic, Gnome 3, and Cinnamon, and they all behave in a similar fashion.

    FOR THOSE WHO NEED A WAY OUT IN SITUATIONS LIKE THIS: Open a terminal with [Ctrl+Alt+F2]. Type [sudo killall firefox] (or whatever application you wish to terminate). Key in your password. Return to your graphical screen with [Ctrl+Alt+F7], and Bob's your uncle. Just re-open Firefox like nothing happened. Next time you are stuck: [Ctrl+Alt+F2], up arrow until you meet the command of your desire, [Ctrl+Alt+F7], etcetera. Hope this is of help.

    My next move will be to upgrade the kernel to 3.4 from the repositories for 12.10. However, since this entails a totally new situation, I will start a new thread on this site to avoid topic pollution. I will keep you posted. Still.

    Read the article

  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example since I can't seem to describe it better than the title.

    Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer. A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data shared.

    So, the question is: what is the best way to access this shared data? I have a lot of options since JS has closures, and I'm not familiar enough to know if I should be using them or avoiding them in this case. Options:

    1. Create a local "reference" (term used lightly) to the shared data (the data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example:

        var schedule = function(){
            var sched = Schedule();
            var t1 = Term( function(x){ // Term.print()
                return (x + sched.data).format();
            });
        };

    2. Bind it to Term explicitly (pass it in Term's constructor or something), or bind it in Schedule after Box passes it, and then access it as an attribute of Term.

    3. Pass it in at the same time x is passed to the print function (from Schedule). This is the most familiar way for me, but it doesn't feel right given JS's closure ability.

    4. Do something weird like bind some context and arguments to print.

    I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.
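    For contrast, a minimal runnable sketch of options 2 and 3, with hypothetical stand-ins for the question's Schedule and Term (format() dropped for brevity):

        // Hypothetical stand-in for the question's Schedule object.
        function Schedule() { return { data: 5 }; }

        function Term(print) { this.print = print; }

        var sched = Schedule();

        // Option 2: bind the shared data onto the Term after creation.
        var t2 = new Term(function (x) {
            return x + this.shared;           // `this` is t2 when called as t2.print(x)
        });
        t2.shared = sched.data;               // bound by Box/Schedule after creation
        console.log(t2.print(3));             // 8

        // Option 3: pass the shared data at call time, alongside x.
        var t3 = new Term(function (x, shared) {
            return x + shared;
        });
        console.log(t3.print(3, sched.data)); // 8

    The closure version (option 1) keeps Term ignorant of Schedule, which is arguably the loosest coupling; options 2 and 3 make the dependency explicit at the cost of a wider Term or print signature.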

    Read the article

  • lsusb - where device description comes from

    - by tommyk
    For one of my attached USB devices (2773:0104) I see no description in the lsusb command output:

        user@Thinkpad-Laptop:~/binaries$ lsusb
        Bus 008 Device 002: ID 0a5c:217f Broadcom Corp.
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 036: ID 2773:0104

    Where does the USB description come from? Is it from the device driver, or is it stored in the hardware itself?

    Read the article

  • SQL Server and the XML Data Type : Data Manipulation

    The introduction of the xml data type, with its own set of methods for processing xml data, made it possible for SQL Server developers to create columns and variables of the type xml. Deanna Dicken examines the modify() method, which provides for data manipulation of the XML data stored in the xml data type via XML DML statements.
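    As a minimal taste of the XML DML the article covers (the document and values here are hypothetical), modify() supports insert, delete, and replace value of:

        DECLARE @doc xml = N'<order><item sku="A1" qty="1"/></order>';

        -- replace value of: update an attribute in place
        SET @doc.modify('replace value of (/order/item/@qty)[1] with "2"');

        -- insert: add a new element as the last child of <order>
        SET @doc.modify('insert <note>rush</note> as last into (/order)[1]');

        -- delete: remove the note again
        SET @doc.modify('delete (/order/note)[1]');

        SELECT @doc;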

    Read the article

  • Handling SQL Server Errors

    This article covers the basics of TRY CATCH error handling in T-SQL, introduced in SQL Server 2005. It includes the usage of common functions to return information about the error, and the use of the TRY CATCH block in stored procedures and transactions.
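    A minimal sketch of the pattern (the table name is hypothetical; the error functions are the standard SQL Server 2005+ set):

        BEGIN TRY
            BEGIN TRANSACTION;
            UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            -- Roll back any open transaction, then report what happened
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;
            SELECT ERROR_NUMBER()   AS ErrorNumber,
                   ERROR_SEVERITY() AS ErrorSeverity,
                   ERROR_LINE()     AS ErrorLine,
                   ERROR_MESSAGE()  AS ErrorMessage;
        END CATCH;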

    Read the article

  • SQL Triggers and when or when not to use them.

    - by John Mitchell
    When I was originally learning about SQL I was always told: only use triggers if you really need to, and opt for stored procedures instead if possible. Unfortunately, at the time (a good few years ago) I wasn't as curious and caring about fundamentals as I am now, so I never did ask the reason why. What's the community's opinion on this? Is it just someone's personal preference, or should triggers be avoided (just like cursors) unless there is a good reason for them?
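    For context, a minimal example of the use case most often cited as a legitimate reason for a trigger, an audit trail (table names hypothetical):

        CREATE TRIGGER trg_Orders_Audit
        ON dbo.Orders
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- "inserted" holds the new row versions for the whole statement
            INSERT INTO dbo.OrdersAudit (OrderId, ChangedAt)
            SELECT OrderId, GETDATE()
            FROM inserted;
        END;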

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle engineer Chris Kasso and technology evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!

    Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby – Adam is someone I have been fortunate to know for many years.

    GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information, offering tips intended for both seasoned and new users and meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish. Attendees were encouraged to pursue tips that contained new information for them. All 50 tips can be accessed here. Below are several examples of more elaborate tips and a final practical tip on how to get in touch with these folks.

    Tip #1: Using the login Command
    * To execute a remote command with asadmin you must provide the admin's user name and password.
    * The login command allows you to store the login credentials to be reused in subsequent commands.
    * You can be logged into multiple servers (distinguished by host and port).
    Example:
        % asadmin --host ouch login
        Enter admin user name [default: admin]>
        Enter admin password>
        Login information relevant to admin user name [admin]
        for host [ouch] and admin port [4848] stored at
        [/Users/ckasso/.asadminpass] successfully.
        Make sure that this file remains protected.
        Information stored in this file will be used by
        asadmin commands to manage the associated domain.
        Command login executed successfully.
        % asadmin --host ouch list-clusters
        c1 not running
        Command list-clusters executed successfully.

    Tip #4: Using the AS_DEBUG Env Variable
    * An environment variable to control client-side debug output.
    * Exposes: command processing info, the URL used to access the command (http://localhost:4848/__asadmin/uptime), and the raw response from the server.
    Example:
        % export AS_DEBUG=true
        % asadmin uptime
        CLASSPATH= ./../glassfish/modules/admin-cli.jar
        Commands: [uptime]
        asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
        ------- RAW RESPONSE  ---------
        Signature-Version: 1.0
        message: Up 7 mins 10 secs
        milliseconds_value: 430194
        keys: milliseconds
        milliseconds_name: milliseconds
        use-main-children-attribute: false
        exit-code: SUCCESS
        ------- RAW RESPONSE  ---------

    Tip #11: Using Password Aliases
    * Some resources require a password to access (e.g. DB, JMS, etc.). The resource connector is defined in the domain.xml.
    Example: suppose the DB resource you wish to access requires an entry like this in the domain.xml:
        <property name="password" value="secretp@ssword"/>
    But company policies do not allow you to store the password in the clear.
    * Use password aliases to avoid storing the password in the domain.xml.
    * Create a password alias:
        % asadmin create-password-alias DB_pw_alias
        Enter the alias password>
        Enter the alias password again>
        Command create-password-alias executed successfully.
    * The password is stored in the domain's encrypted keystore.
    * Now update the password value in the domain.xml:
        <property name="password" value="${ALIAS=DB_pw_alias}"/>

    Tip #21: How to Start GlassFish as a Service
    * Configuring a server to automatically start at boot can be tedious, and each platform does it differently.
    * The create-service command makes this easy:
      – Windows: creates a Windows service
      – Linux: /etc/init.d script
      – Solaris: Service Management Facility (SMF) service
    * You must execute create-service with admin privileges.
    * It can be used for the DAS or instances.
    * Try it first with the --dry-run option.
    * There is an (unsupported) _delete-server.
    Example:
        # asadmin create-service domain1
        The Service was created successfully. Here are the details:
        Name of the service: application/GlassFish/domain1
        Type of the service: Domain
        Configuration location of the service: /work/gf-3.1.2.2/glassfish3/glassfish/domains
        Manifest file location on the system: /var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml
        You have created the service but you need to start it yourself.
        Here are the most typical Solaris commands of interest:
        * /usr/bin/svcs -a | grep domain1    // status
        * /usr/sbin/svcadm enable domain1    // start
        * /usr/sbin/svcadm disable domain1   // stop
        * /usr/sbin/svccfg delete domain1    // uninstall

    Tip #34: Posting a Command via REST
    * Use wget/curl to execute commands on the DAS.
    Example: deploying an application:
        % curl -s -S \
            -H 'Accept: application/json' -X POST \
            -H 'X-Requested-By: anyvalue' \
            -F id=@/path/to/application.war \
            -F force=true http://localhost:4848/management/domain/applications/application
    * Use @ before a file name to tell curl to send the file's contents.
    * The force option tells GlassFish to force the deployment in case the application is already deployed.

    Tip #46: Upgrading to a Newer Version
    * Upgrade applications and configuration from an earlier version.
    * Upgrade Tool: side-by-side upgrade
      – GUI: asupgrade
      – CLI: asupgrade --c
      – What happens? The older source domain is copied to the target domain directory, then: asadmin start-domain --upgrade
    * Update Tool and pkg: in-place upgrade
      – GUI: updatetool, install all Available Updates
      – CLI: pkg image-update
      – Then upgrade the domain: asadmin start-domain --upgrade

    Tip #50: How to reach us?
    * GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
    * [email protected]
    * @glassfish
    * facebook.com/glassfish
    * youtube.com/GlassFishVideos
    * blogs.oracle.com/theaquarium

    Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta's new book Java EE 6 Pocket Guide.

    Read the article

  • What is the purpose of bitdepth for the several components of the framebuffer in glfwWindowHint function of GLFW3?

    - by Rui d'Orey
    I would like to know what the following "framebuffer related hints" of the GLFW3 function glfwWindowHint are:
    * GLFW_RED_BITS
    * GLFW_GREEN_BITS
    * GLFW_BLUE_BITS
    * GLFW_ALPHA_BITS
    * GLFW_DEPTH_BITS
    * GLFW_STENCIL_BITS
    What is the purpose of these? Are their default values usually enough? Where are those bits stored? In a buffer in the GPU? What do they affect, and in what way? Thank you in advance!
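    For reference, a minimal sketch of how these hints are set before window creation (the 8/8/8/8/24/8 values below are the common defaults, chosen purely for illustration; the driver may substitute the closest supported combination):

        #include <GLFW/glfw3.h>

        int main(void)
        {
            if (!glfwInit())
                return -1;

            /* Request per-channel bit depths for the default framebuffer. */
            glfwWindowHint(GLFW_RED_BITS, 8);
            glfwWindowHint(GLFW_GREEN_BITS, 8);
            glfwWindowHint(GLFW_BLUE_BITS, 8);
            glfwWindowHint(GLFW_ALPHA_BITS, 8);
            glfwWindowHint(GLFW_DEPTH_BITS, 24);
            glfwWindowHint(GLFW_STENCIL_BITS, 8);

            /* Hints apply to windows created after they are set. */
            GLFWwindow *window = glfwCreateWindow(640, 480, "demo", NULL, NULL);
            if (!window) {
                glfwTerminate();
                return -1;
            }

            glfwDestroyWindow(window);
            glfwTerminate();
            return 0;
        }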

    Read the article

  • Name user object and user table correctly

    - by Marc
    It may be simple, but I think about this every time I build a new application. How do you name the class for the current user of the application, and the ORM class for the user table? Usually I have something like:
    * CurrentUser: the logged-in user, stored in session, info for last activity, etc.
    * User: the ORM class (C# EF Code First, but it doesn't matter)
    And yes, they could have the same name in different namespaces, but I don't really like that.

    Read the article

  • Free tools for SQL Server - Automating Execution Plan Analysis

    - by jchang
    Since this topic is being discussed, I will plug my own tool, SQL Exec Stats, and its (a little dated) documentation. The main capability is cross-referencing index usage with specific execution plans. Another feature is generating execution plans for all stored procedures in a database, along with the index usage cross-reference. There are several sources of execution plans or plan handles: this could be a live trace, a previously saved trace, previously saved sqlplan files, from dm_exec_cached_plans,...(read more)

    Read the article

  • How to view files from host? (running inside VMWare Fusion)

    - by Dave Long
    I have just finished moving my development server into an Ubuntu 10.04 Server VM in VMware Fusion 3. I have all of my MySQL and Tomcat stuff running and am now trying to connect to my actual site files, which are stored on my Mac under /{User Root}/Workspace/ColdFusion/. I know that normally you should be able to set up a shared folder in VMware and find it under /mnt/hgfs/{Share Name}, but I can't find it. I am not sure if I have to mount it manually or what.

    Read the article

  • What is a good resource for learning how the .Net Framework works? [on hold]

    - by Till Death Developer
    I've been developing web apps for a while now, so I know how to get the job done. What I don't know is how everything really works. I know some of it, but the rest I can't get a grasp on: how abstract works, what happens when I instantiate an object from a class that inherits from an abstract class, where things get stored (heap vs. stack). In other words, the interview questions that I suck at. Any advice would be great: books, videos, online courses, whatever you can provide would really help me.
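    For reference, the abstract-class scenario being asked about looks like this in C# (a minimal sketch):

        using System;

        abstract class Animal
        {
            public abstract string Speak();   // no body; subclasses must override
        }

        class Dog : Animal
        {
            public override string Speak() => "Woof";
        }

        static class Program
        {
            static void Main()
            {
                // Animal a = new Animal();    // compile error: Animal is abstract
                Animal a = new Dog();          // only the concrete subclass is instantiated
                Console.WriteLine(a.Speak());  // "Woof", dispatched at runtime

                // The Dog object itself lives on the heap; the local
                // reference `a` lives on the stack.
            }
        }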

    Read the article

  • Quickly Copy Movie Files to Individually Named Folders

    - by DigitalGeekery
    Some HTPC media manager applications require movie files to be stored in separate folders to properly store information such as cover art images and other metadata. Here we look at copying movie files to individual folders. If you already have a large movie collection stored in a single folder, we'll show you how to quickly move those files into their own individually named folders.

    File2Folder
    File2folder is a handy portable app that automatically creates and moves movie files into a folder of the same filename. There is no installation needed; simply download and run the .exe file (link below). Enter the current movie directory, or browse for the folder. File2folder now supports both local and network shares. When you are ready to create the folders and move the files, click Move! You'll see the move progress displayed in the window. When the process is finished, you'll have all your movie files in individual folders.

    Change your mind? Just click the Undo! button, and the move and folder creation process will be undone. If you would like to have the folder monitored for new files, click the Start button. File2folder will process any new files it discovers every 180 seconds. To turn it off, click Stop.

    This simple little program is a huge timesaver for those looking to organize movie collections for their HTPC. We should also note that this will work with any files, not just videos.

    Download file2folder

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I've been meaning to write for a while; good that it fits with this month's T-SQL Tuesday, hosted by Joey D'Antoni (@jdanton).

    Ever since I got into databases, I've been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I've also spent a long time as a developer, and appreciate that databases don't exactly fit within the stuff I learned in my first year of uni, particularly the "Algorithms and Data Structures" subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it's why I'm a fan of database internals, of indexes, latches, execution plans, and so on: the developer in me wants to be reassured that we're getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, which was obviously going to be much faster than the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I'd looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn't exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton, its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you'd've made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it's durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and bw-trees (which avoid locking through the use of pointers), and each update is handled as a delete-and-insert.

    This is data the way that developers do it when they're coding for performance, the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don't support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley
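    As a rough sketch of what this looks like in T-SQL (names hypothetical; the database first needs a MEMORY_OPTIMIZED_DATA filegroup):

        -- One-time setup: add a memory-optimized filegroup and container
        ALTER DATABASE CurrentDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE CurrentDb ADD FILE (NAME = 'imoltp_f', FILENAME = 'C:\data\imoltp_f')
            TO FILEGROUP imoltp_fg;

        -- A memory-optimized (Hekaton) table with a hash index
        CREATE TABLE dbo.ShoppingCart
        (
            CartId  INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            UserId  INT NOT NULL,
            Created DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);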

    Read the article

  • Learn To Use Functions

    Begin Linux: "Functions are a script within a script which can be defined by the user and stored in memory, allowing you to reuse the function repeatedly. This also provides a modular aspect that allows you to debug one function at a time by disabling functions."
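    As a quick illustration (a minimal bash sketch):

        # Define a function in the current shell
        greet() {
            echo "Hello, $1"
        }

        greet "world"    # call it as often as needed
        greet "again"

        unset -f greet   # disable the function while debugging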

    Read the article
