Search Results

Search found 13692 results on 548 pages for 'bad practices'.


  • What are the best practices for service accounts?

    - by LockeCJ
    We're running several services in our company under a shared domain account. Unfortunately, the credentials for this account are widely distributed and are used frequently for both service and non-service purposes. This has led to a situation where the services can be temporarily down because this shared account gets locked. Obviously, this situation needs to change. The plan is to change the services to run under a new account, but I don't think this goes far enough, as that account is subject to the same lockout policy. My question is this: should we be setting up the service accounts differently from other domain accounts, and if so, how do we manage those accounts? Please keep in mind that we are running a 2003 domain, and upgrading the domain controller is not a viable solution in the near term.

  • GPO best practices: Security-Group Filtering Versus OU

    - by Olivier Rochaix
    Good afternoon everyone, I'm quite new to Active Directory. After upgrading the functional level of our AD from 2003 to 2008 R2 (I needed it to apply a fine-grained password policy), I started to reorganize my OUs. I keep in mind that a good OU organization facilitates the application of GPOs (and maybe GPPs). But in the end, it feels more natural to me to use security-group filtering (from the Scope tab) to apply my policies, instead of targeting OUs directly. Do you think this is a good practice, or should I stick to OUs? We are a small organisation with 20 users and 30-35 computers, so we have a simple OU tree but a more subtle split with security groups. The OU tree doesn't contain any objects except at the bottom level; each bottom-level OU contains computers, users, and of course security groups. These security groups contain the users and computers of the same OU. Thanks for your advice, Olivier

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both sets of settings). I am assuming all projects should be handled by version control like git or svn; manual file transfer (like FTP) is wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jury-rigs a solution. Below are the three methods I know of, and I am hoping that someone can submit a more elegant solution.

    1) File based. The system loads a folder structure based on the URL requested:

        /site.com
        /site.fakeTLD
        /lib
        index.php

    For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file, add site.fakeTLD to point to my own computer (127.0.0.1/localhost), and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "Host" injection attack: someone loading site.com could set the Host header to site.fakeTLD, and the system would load my development config files instead of production.

    2) Config based. The config file contains one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used:

        $use = 'production'; //'development';

    This leaves the repo open to human error should one of the developers forget to enable the right setting.

    3) File-system-check based. All the development machines have an extra empty file called "development.txt" or something similar. Each time the system loads, it checks for this file: if found, it knows it is in development mode; if missing, it knows it is in production mode. Since the file is never added to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since filesystem checks are slow. A fourth, environment-variable-based variant is sketched below.
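
    As a hedged sketch of that fourth option (the APP_ENV variable name and the config.*.ini file layout are illustrative assumptions, not from the question): have the web server or shell define an environment variable outside the repository, and default to production whenever it is absent.

        <?php
        // Hypothetical sketch: the server config (vhost SetEnv, .htaccess,
        // or shell profile) defines APP_ENV outside the repo, so nothing
        // environment-specific is ever committed.
        $env = getenv('APP_ENV');
        if ($env === false) {
            $env = 'production'; // fail towards the safe side
        }
        $config = parse_ini_file(__DIR__ . '/config.' . $env . '.ini');

    Like method 3 this keeps the switch out of version control, but without relying on a marker file in the document tree.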

  • best practices for setting up a development environment

    - by Sharique
    I use Linux as my primary OS. I need some suggestions on how I should set up my desktop and development environment. I work mostly on .NET and Drupal, but some of the time on other LAMP products and on C/C++ and Qt. I'm also interested in mobile (Android...) and embedded development. Currently I install everything on my main OS, even things I use only a little, and I use VMs very little. Should I use a separate VM for each kind of development (like one for .NET/Mono, another for C++, one for mobile, one for the DB only, one for everything else), or keep the primary development environment on the main OS and move the others into VMs? My constraints: the main OS should not get messed up, things must be easy to organize (a must), and performance should be optimal.

  • Win7 Domain User Profile - Desktop Icon management best practices request

    - by Doltknuckle
    Here's the situation: we have a large (5,000+ user) organization that is currently using folder redirection to manage the Windows desktop icons. This folder is redirected to a network share where we can centrally manage the different sites and such. When a user tries to use a computer while the network is not available, they are unable to use any shortcuts in the Public folder. We only redirect the C:\Users\%username%\Desktop folder. Does anyone have any suggestions on how to go about managing desktop icons? We still want a central location to manage these items, but need a way to keep the system working when the network is unavailable. As a point of clarification, the network rarely goes down. We do have instances where a few computers lack a network connection; usually, something is simply unplugged. Since we have multiple sites, the line from a branch to the central office has gone down a few times. This is more of an attempt to maintain a positive end-user experience when disconnected from the network.

  • Best practice settings for Oracle database creation

    - by Gary
    When installing an Oracle database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you do a REVOKE of a GRANT that is installed by default. That may be version dependent, as 11g seems more locked down in its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything or where you disagree (and why).

    Database parameters:

    - Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
    - DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
    - GLOBAL_NAMES to true
    - OPEN_LINKS to 0 (did not expect them to be used in this environment)
    - Character set: AL32UTF8

    Profiles: I created an amended password verify function that used the APEX dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

    Developer logins:

        CREATE PROFILE profile_dev LIMIT
          FAILED_LOGIN_ATTEMPTS 8
          PASSWORD_LIFE_TIME 32
          PASSWORD_REUSE_TIME 366
          PASSWORD_REUSE_MAX 12
          PASSWORD_LOCK_TIME 6
          PASSWORD_GRACE_TIME 8
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME 1080
          IDLE_TIME 180
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Application login:

        CREATE PROFILE profile_app LIMIT
          FAILED_LOGIN_ATTEMPTS 3
          PASSWORD_LIFE_TIME 999
          PASSWORD_REUSE_TIME 999
          PASSWORD_REUSE_MAX 1
          PASSWORD_LOCK_TIME 999
          PASSWORD_GRACE_TIME 999
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME unlimited
          IDLE_TIME unlimited
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema owner account: CREATE CLUSTER, CREATE TYPE, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE, CREATE JOB, CREATE MATERIALIZED VIEW, CREATE SEQUENCE, CREATE SYNONYM, CREATE TRIGGER.
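
    For concreteness on the REVOKE point, a hedged example of the kind of statement meant (UTL_FILE is just a commonly cited package granted to PUBLIC by default; whether revoking it is safe depends entirely on the applications in the database):

        -- hypothetical hardening step, not from the original setup
        REVOKE EXECUTE ON UTL_FILE FROM PUBLIC;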

  • best practices to block social sites

    - by adopilot
    In our company we have around 100 workstations with internet access, and day by day the situation gets worse from the perspective of internet access being used for private jobs and time being wasted on social sites. Honestly, I am not in favour of blocking sites like Facebook, YouTube, and others like them, but day by day my colleagues don't finish their tasks, and whenever I look at their monitors they are running IE or Mozilla, chatting, and things like that. I would also like to block YouTube at times when we have very poor internet access speed. Here are my questions: Do other companies block social sites? Do I need a dedicated device for that, like a hardware firewall or a super-expensive router, or can I do it with my existing FreeBSD 6.1 self-made router with two LAN cards and NAT configured so it acts as a router? I was trying to do that using ipfw and routerfirewall but without success. My rules look like this:

        ipfw add 25 deny tcp from 192.168.0.0/20 to www.facebook.com
        ipfw add 25 deny udp from 192.168.0.0/20 to www.facebook.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.dernek.
        ipfw add 25 deny tcp from 192.168.0.0/20 to www.youtube.
        ipfw add 25 deny udp from 192.168.0.0/20 to www.youtube.com

  • Best practices for re-IP'ing / migrating servers and applications

    - by warren
    Some of this question would be highly application-specific, but what approaches do you take when looking to migrate applications from one server/platform to another, and servers from one network segment to another? For applications that can't be re-IP'd (many exist in this category), the general answer is to nuke and pave (or extend a clusterable application, then remove the segment that needs to be "moved"). For "normal" applications (httpd, mail, directory services, etc.), what are the checks you perform before, during, and after a move to ensure the health of the migrated app/server? An example with Apache:

    1. back up the httpd conf directory
    2. change the httpd conf files to use the new IP address of the server
    3. change (or add) the IP of the server
    4. restart Apache
    5. verify the web server still serves pages
    6. reboot the server
    7. verify the environment comes back up healthy
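
    As a rough command-line sketch of what steps 1, 2, 4 and 5 can look like on a generic Linux box (the paths and example addresses are assumptions, not from the question):

        # 1. back up the conf directory first
        cp -a /etc/httpd/conf /etc/httpd/conf.bak.$(date +%F)
        # 2. point the conf files at the new address
        sed -i 's/10\.0\.0\.5/10.0.1.5/g' /etc/httpd/conf/httpd.conf
        # 4-5. after the re-IP: syntax check, restart, spot-check a page
        apachectl configtest && apachectl restart
        curl -sI http://10.0.1.5/ | head -n 1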

  • Hard Drive Bad Sector marking utility

    - by Kevin Boyd
    I already have Windows XP. While installing Ubuntu (dual boot), the disk drive just got stuck at one place and didn't seem to move ahead. Is there a utility that marks bad sectors so that the disk doesn't seek them later? I tried running Seagate SeaTools on the drive, but both the short test and the long test fail before they even start. Even chkdsk /f /r doesn't seem to work, as the system locks up at stage four.

  • Visual Studio 2010 Best Practices

    - by Etienne Tremblay
    I'd like to thank Packt for providing me with a review copy of the Visual Studio 2010 Best Practices eBook. In fairness, I also know the author, Peter, having seen him speak at DevTeach on many occasions. I started by looking at the table of contents to see what this book was about, knowing that "best practices" is a real misnomer, and I wanted to see what they were. I like the fact that he starts the book by saying they are not really best practices but rather recommended practices.

    As a Team Foundation Server user I found that chapter 2 was more for the open source crowd, and I really skimmed it. The portion on branching was well documented (although I'm not a fan of the testing branch myself), but the rest was right on. The section on the merge-remote-changes (bring the outside to you) paradigm is really important and was touched on.

    Chapter 3 has good, solid practices on low-level constructs like generics and exceptions. Chapter 4 dives into architectural practices like decoupling, distributed architecture and data-based architecture; DTOs and ORMs are touched on briefly, as is NoSQL.

    Chapter 5 is about deployment and is really a great primer on all the "packaging" technologies like Visual Studio Setup and Deployment (deprecated in 2012), ClickOnce and WiX, the major player outside of commercial solutions. There is a nice section on how to move from Visual Studio Setup and Deployment to WiX; this is going to be important in the coming years, since VS 2012 doesn't support it.

    In chapter 6 we dive into automated testing practices, including test coverage, mocking, TDD, SpecDD and continuous testing. Peter covers all those concepts really nicely, albeit succinctly. For a book on recommended practices I find this is really good.

    I really enjoyed chapter 7, which gave me a lot of great tips to enhance my Visual Studio "experience". The tips on organizing projects were good. Also, even though I knew about configurations, I like that he put that in there so you can move all your settings to another machine; a lot of people don't know about that. Quick Find and ReSharper are also briefly covered. He touches on macros (deprecated in 2012). Finally he touches on continuous integration, a very important concept in today's ALM landscape.

    Chapter 8 is all about parallelization: threads, async, division of labor, reactive extensions. All those concepts are touched on, and again generalized approaches to those modern problems are given.

    Chapter 9 goes into distributed apps, the most used and accepted practice in the industry for .NET projects. The chapter tackles concepts like scalability, messaging and the cloud (the flavor of the month of distributed apps, although I think this one will stick ;-)). He also looks at protocols (TCP/UDP) and how to debug distributed apps, and touches on logging and health monitoring.

    Chapter 10 tackles recommended practices for web services, starting with implementing WCF services, which goes into all sorts of goodness like how to host in IIS or self-host, how to manually test WCF services, plus a section on authentication and authorization. ASP.NET web services are also touched on in that chapter.

    All in all a good read, with nice tips and accepted practices. I like the conciseness of the subjects; Peter touches on a lot of things in this book and uses a lot of current technology flavors to explain the concepts.

    Cheers, ET

  • bad RAM or bad motherboard?

    - by user39508
    I have a computer that crashes after about 5-45 seconds of operation. It can run memtest86+ without displaying any errors, but that doesn't prevent it from crashing within the time frame listed above. The heat sink appears to be installed correctly, and I don't think it is related to overheating. The motherboard is connected to the RAM and a monitor; nothing else is installed. The processor is an Atom 330, running memtest86+ 4.0. Any insight into whether the RAM is bad or whether it is the motherboard/PSU/CPU? Thanks!

  • Is it a bad practice to include all the enums in one file and use it in multiple classes?

    - by Bugster
    I'm an aspiring game developer; I work on occasional indie games, and for a while I've been doing something that seemed like a bad practice at first, but I really want to get an answer from some experienced programmers here. Let's say I have a file called enumList.h where I declare all the enums I want to use in my game:

        // enumList.h
        enum materials_t { WOOD, STONE, ETC };
        enum entity_t { PLAYER, MONSTER };
        enum map_t { MAP_2D, MAP_3D }; // was "2D, 3D"; C++ identifiers can't begin with a digit
        // and so on.

        // Tile.h
        #include "enumList.h"
        #include <vector>

        class tile {
            // stuff
        };

    The main idea is that I declare all the enums in the game in one file, and then include that file when I need to use a certain enum from it, rather than declaring it in the file where I need to use it. I do this because it keeps things clean; I can access every enum in one place rather than having pages opened solely for accessing one enum. Is this a bad practice, and can it affect performance in any way?

  • Server-side JavaScript best practices?

    - by Petteri Hietavirta
    We have a CMS built on Java, and it has Mozilla Rhino for the server-side JS. At the moment the JS code base is small but growing. Before it is too late and the code has become a horrible mess, I want to introduce some best practices and a coding style. Obviously namespace control is pretty important, but what about other best practices, especially for Java programmers?
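
    As a minimal sketch of the namespace control mentioned above (the CMS global and the function names are invented for illustration), the classic one-global-object module idiom works the same in Rhino as in browsers:

        // Hypothetical sketch: a single global namespace object shared by
        // all server-side scripts; each module hangs off it.
        var CMS = CMS || {};
        CMS.text = (function () {
            // private helper, invisible outside the module
            function collapse(s) { return s.replace(/\s+/g, " "); }
            return {
                tidy: function (s) { return collapse(s); }
            };
        })();
        // usage: CMS.text.tidy("hello   world") returns "hello world"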

  • Apache Bad Request "Size of a request header field exceeds server limit" with Kerberos SSO

    - by Aurelin
    I'm setting up an SSO for Active Directory users through a website that runs on Apache (Apache2 on SLES 11.1), and when testing with Firefox it all works fine. But when I try to open the website in Internet Explorer 8 (Windows 7), all I get is:

        Bad Request
        Your browser sent a request that this server could not understand.
        Size of a request header field exceeds server limit.
        Authorization: Negotiate [ultra long string]

    My vhost config looks like this:

        <VirtualHost hostname:443>
            LimitRequestFieldSize 32760
            LimitRequestLine 32760
            LogLevel debug
            <Directory "/data/pwtool/sec-data/adbauth">
                AuthName "Please login with your AD-credentials (Windows Account)"
                AuthType Kerberos
                KrbMethodNegotiate on
                KrbAuthRealms REALM.TLD
                KrbServiceName HTTP/hostname
                Krb5Keytab /data/pwtool/conf/http_hostname.krb5.keytab
                KrbMethodK5Passwd on
                KrbLocalUserMapping on
                Order allow,deny
                Allow from all
            </Directory>
            <Directory "/data/pwtool/sec-data/adbauth">
                Require valid-user
            </Directory>
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            SSLCertificateFile /etc/apache2/ssl.crt/hostname-server.crt
            SSLCertificateKeyFile /etc/apache2/ssl.key/hostname-server.key
        </VirtualHost>

    I also made sure the cookies are deleted, and I tried several smaller values for LimitRequestFieldSize and LimitRequestLine. Another thing that seems weird to me is that even with LogLevel debug I don't get any logs about this. The log's last line is:

        ssl_engine_kernel.c(1879): OpenSSL: Write: SSL negotiation finished successfully

    Does anyone have an idea about that?

  • Recovery strategy for a single bad sector in moricons.dll

    - by Damon
    This week, my hard disk made me an early Christmas present in the form of a single defective sector. To make up for the puny size of the present, it chose a sector inside moricons.dll for that. This means that the system now takes about 5 minutes to boot before Windows gives up and moves on, and there are two dozen scary "critical failure" entries in the system log after every boot, which is annoying. OK, admittedly, I shouldn't complain; it could be worse, the bad sector could be in ntldr...

    SMART info more or less indicates (for what SMART can indicate anyway) that the drive is mostly OK. Soft Read Error Rate has a score of 96, and Current Pending Sector Count has a raw value of 8, which translates to a score of 100. Acronis DriveMonitor makes this an issue (lowering the overall rating to 75%); HDD Health calls it "excellent", giving an overall rating of 95% (which is what this hard disk has had from day one). No single score is below 95 (power-on hours and spin-up count), and most are 100 anyway. Well, whatever, I've seen drives with perfect SMART values fail from one second to the next, and drives with moderate values work for years, so I'm inclined not to put too much weight on that.

    TL;DR Now... to the problem: I don't feel like trashing the disk just yet (that's planned along with a new OS install, upgrading to Win7 early next year, independently of this issue), but in the meantime I would still like to have a smoothly running system again. Therefore, I feel tempted to tamper with it, but before I render my system entirely unusable (since I've never done this before), I'd like to verify that my planned procedure is likely to succeed in producing a working system again:

    1. Copy moricons.dl_ from the Windows install disk, rename it to moricons.zip, and unzip it. This gives an intact 5.1.2600.2180 version (the broken one is 5.1.2600.5512, but I guess that makes little difference, since it's an icon-only DLL, and an outdated copy should work better than one that can't be read).
    2. Run chkdsk /r /f, which will "repair" the file (i.e. delete the file without asking, tell the drive to remap the sector, and toss some unreadable junk into a file with a hexadecimal number).
    3. Hopefully Windows still boots after this (is that a reasonable expectation, or do I need to have something like BartPE ready? But then again, what's that good for in case chkdsk has nuked the entire file system...).
    4. Delete the junk file generated by chkdsk and copy the new DLL to %windir%\system32.
    5. Reboot. Pray.

    Maybe I just shouldn't touch anything, since it still kind of works, if annoyingly. Unsure... But is there anything fundamentally wrong with the planned approach? Is this a sensible approach at all?

  • Why do programmers seem to be such bad spellers?

    - by Joel Etherton
    Programming languages are very precise tools based on explicit grammars. They're very picky, and when being used they require an exacting amount of detail. C#, for instance, is case sensitive, so even getting the case of an argument wrong will cause an error. Questions asked all over the Stack Exchange sites are replete with misspellings, grammatical errors, and other problems that seem to indicate a lack of attention to detail when it comes to the language itself. Now, I understand there are a lot of programmers out there whose native language is not English, and I am not directing this question (rant, one might say) at them. I'm referring to the individuals who are clearly from an English-speaking background who refuse to pay attention to these simple details. I am not perfect by any means, but I try to use the language correctly so that my meaning will be understood correctly. I find programmers misspelling variable names, classes, and all manner of words in any kind of technical documentation they might write. I have had to withstand code where I am repeatedly referring to the subit[sic] button or HttpWebResponse reponse. The general complaint about bad spelling is one thing, and it will always be there. I accept that. But my question/comment is about the prevalence of bad spelling within the programming community. I would think that people who deal with such exacting tools would be more naturally predisposed towards proper spelling. Yet this doesn't seem to be the case.

  • Are very short or abbreviated method/function names that don't use full words bad practice or a matter of style?

    - by Alb
    Is there nowadays any case for brevity over clarity in method names? Tonight I came across the Python method repr(), which seems like a bad name for a method to me. It's not an English word. It is apparently an abbreviation of 'representation', and even if you can deduce that, it still doesn't tell you what the method does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough to reveal enough about the method that you would easily find one when looking for it. Method names made from words help your code read like English. repr() seems to have no advantage as a name other than being short, and IDE auto-complete makes brevity a non-issue. An additional reason given in an answer is that Python names are brief so that you can do many things on one line; surely the better way is to extract the many things into their own function, and repeat until lines are not too long. Are these just a hangover from the Unix way of doing things? Commands with names like ls, rm, ps and du (if you can even call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different from methods in code, so whether those are bad names is a separate matter.

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones in the way the industry is currently doing it is relatively new. Of course, there has been enterprise-level mobile development for several decades, but the platforms have changed. Think of the move from stylus input to touch input (different screen resolutions, different control layouts, etc.), or of the new ways of handling multi-tasking on mobile platforms (e.g. WP7's "tombstoning"). The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, developing a functionally equal application for both desktop and smartphone comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as the UI language, it's becoming appealing to promote the reuse of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different from the needs of a desktop application; an (almost) one-to-one conversion will therefore be impossible. My question: are there "best practices", pitfalls, etc. documented for developing "cross-device" applications (for example, an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

  • Best Practices to Accelerate Oracle VM Server Deployments

    - by Honglin Su
    The IOUG (Independent Oracle User Group) Virtualization SIG is hosting a webcast on best practices for Oracle VM server virtualization: July 11, 2012, "Best Practices to Accelerate Oracle VM Server on SPARC Deployments". Register here. To learn about best practices for Oracle VM Server for x86, watch the session replay here. For more white papers about best practices, visit the Oracle VM OTN page here.

  • Am I wrong to disagree with A Gentle Introduction to symfony's template best practices?

    - by AndrewKS
    I am currently learning symfony, and while going through the book A Gentle Introduction to symfony I came across this section in "Chapter 4: The Basics of Page Creation" on creating templates (or views):

    "If you need to execute some PHP code in the template, you should avoid using the usual PHP syntax, as shown in Listing 4-4. Instead, write your templates using the PHP alternative syntax, as shown in Listing 4-5, to keep the code understandable for non-PHP programmers."

    Listing 4-4 - The Usual PHP Syntax, Good for Actions, But Bad for Templates

        <p>Hello, world!</p>
        <?php
        if ($test) {
            echo "<p>".time()."</p>";
        }
        ?>

    (The ironic thing is that the echo statement would look even better if time were a variable declared in the controller, because then you could just embed the variable in the string instead of concatenating.)

    Listing 4-5 - The Alternative PHP Syntax, Good for Templates

        <p>Hello, world!</p>
        <?php if ($test): ?>
        <p><?php echo time(); ?></p>
        <?php endif; ?>

    I fail to see how Listing 4-5 makes the code "understandable for non-PHP programmers", and its readability is shaky at best; 4-4 looks much more readable to me. Are there any programmers using symfony who write their templates like those in 4-4 rather than 4-5? Are there reasons I should use one over the other? There is the very slim chance that somewhere down the road someone less technical could be editing the template, but how does 4-5 actually make it more understandable to them?

  • [hibernate - JPA] good practices and bad practices

    - by blow
    Hi all, I have some questions about interacting with Hibernate.

    1. openSession or getCurrentSession (without JTA, thread-bound instead)?

    2. How do I mix session operations with a Swing GUI? Is it good to have something like this in a JavaBean class?

        public void actionPerformed(ActionEvent event) {
            // session code
        }

    3. Can I add methods to my entities that contain HQL queries, or is that a bad practice? For example:

        // This method is in an entity class (MyOtherEntity.java)
        public int getDuration() {
            Session session = HibernateUtil.getSessionFactory().getCurrentSession();
            session.beginTransaction();
            // note: sum() comes back from HQL as a Long, not an Integer
            Long sum = (Long) session.createQuery(
                    "select sum(e.duration) as duration from MyEntity as e"
                    + " where e.myOtherEntity.id = :id group by e.name")
                .setLong("id", getId())
                .uniqueResult();
            return sum == null ? 0 : sum.intValue();
        }

    Alternatively, how can I do this in a better, more elegant way? Thanks.
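
    On point 3, a hedged sketch of the usual alternative: keep the HQL in a small DAO/repository class so the entity never touches the Session (the class and method names here are invented for illustration):

        import org.hibernate.Session;

        // Hypothetical sketch of a DAO that owns the query instead of the entity.
        public class MyEntityDao {
            public int totalDuration(Session session, long otherEntityId) {
                Long sum = (Long) session.createQuery(
                        "select sum(e.duration) from MyEntity e"
                        + " where e.myOtherEntity.id = :id")
                    .setLong("id", otherEntityId)
                    .uniqueResult();
                return sum == null ? 0 : sum.intValue();
            }
        }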

  • Existing web-site CSS replacement (re-skinning) best practices without changing the HTML

    - by Enigmativity
    I can see a number of other good answers to questions relating to CSS best practices on Stack Overflow:

    - How to Manage CSS Explosion
    - CSS Conventions / Code Layout Models
    - Are there any CSS standards that I should follow while writing my first stylesheet?
    - What is the best method for tidying CSS?
    - Best Practices - CSS Stylesheet Formatting

    But I think I have a different problem. I'm trying to "re-skin" an existing site that has been nicely built using divs and uls and has a good existing CSS file, but when I start making changes to the CSS I quickly find that I break the layout. My feeling is that it is very hard to get a feel for how all the CSS works together, and indeed which CSS is affecting parent and sibling elements in the HTML. So, my question is: what are the best practices for re-skinning an existing web site by replacing the CSS only and not modifying the existing HTML? I can't change the classes, IDs, node hierarchy, etc. An example of the particular site that I am trying to re-skin is http://demo.nopcommerce.com/. The existing CSS can be as complicated/detailed as this extract from the main CSS file:

        .header-selectors-wrapper { text-align: right; float: right; width: 500px; }
        .header-currencyselector { float: right; }
        .header-languageselector { float: left; }
        .header-taxDisplayTypeSelector { float: right; }
        .header-links-wrapper { float: right; text-align: right; width: 570px; }
        .header-links {
            border: solid 1px #9a9a9a;
            padding: 5px 5px 5px 5px;
            margin-bottom: 5px;
            display: inline-table;
        }
        .order-summary-content .cart .cart-item-row td,
        .wishlist-content .cart .cart-item-row td {
            border-bottom: 1px solid #c5c5c5;
            vertical-align: middle;
            line-height: 30px;
        }
        .order-summary-content .cart .cart-item-row td.product,
        .wishlist-content .cart .cart-item-row td.product {
            text-align: left;
            padding: 0px 10px 0px 10px;
        }
        .order-summary-content .cart .cart-item-row td.product a,
        .wishlist-content .cart .cart-item-row td.product a {
            font-weight: bold;
        }

    Any help would be appreciated.

  • Best Practices for working with files via C#

    - by user177883
    The application I work on generates several hundred files in a 15-minute period, and the back end of the application takes these files and processes them (updates the database with their values). One problem is database locks. What are the best practices for working with several thousand files so as to avoid locking and to process the files efficiently? Would it be more efficient to create a single file and process it, or to process one file at a time? What are some common best practices?
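
    As a rough sketch of the one-file-at-a-time approach (the folder paths, the move-after-processing convention, and the elided parse step are all assumptions for illustration, not from the question):

        using System;
        using System.IO;

        // Hypothetical sketch: drain the drop folder one file at a time, keep
        // each database transaction short, and move every file out once it is
        // done so a crash never reprocesses finished work.
        class FileDrainer
        {
            const string DropFolder = @"C:\feeds\incoming";
            const string DoneFolder = @"C:\feeds\processed";

            static void Main()
            {
                foreach (string path in Directory.GetFiles(DropFolder))
                {
                    string[] lines = File.ReadAllLines(path);
                    // ...parse 'lines' and apply the updates in one short
                    // transaction here, so row locks are held only briefly...
                    File.Move(path, Path.Combine(DoneFolder, Path.GetFileName(path)));
                }
            }
        }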
