Search Results

Search found 12439 results on 498 pages for 'bad practice'.

Page 25 of 498

  • apache2: bad user name www-data

    - by Robert Ross
    I just tried restarting my web server because of an update I made to my php.ini. Originally I was getting something about the PID file being overwritten; now I just get this:

        * Starting web server apache2
        apache2: bad user name www-data

    This has NEVER happened before, and I haven't changed any permissions or apache2 configuration files. What gives?

  • Returning IEnumerable from an indexer, bad practice?

    - by fearofawhackplanet
    If I had a CarsDataStore representing a table something like:

        Cars
        --------------
        Ford | Fiesta
        Ford | Escort
        Ford | Orion
        Fiat | Uno
        Fiat | Panda

    then I could do:

        IEnumerable<Cars> fords = CarsDataStore["Ford"];

    Is this a bad idea? It's inconsistent with the other datastore objects in my API (which all have a single-column PK indexer), and I'm guessing most people don't expect an indexer to return a collection in this situation.

  • Getters and Setters are bad OO design?

    - by Dan
    Getters and Setters are bad

    Briefly reading over the above article, I find that getters and setters are bad OO design and should be avoided, as they go against encapsulation and data hiding. Given that, how can they be avoided when creating objects, and how can one model objects to take this into account? In cases where a getter or setter is required, what other alternatives can be used? Thanks.
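
    A hedged sketch of one common alternative (my own example, not taken from the linked article): instead of exposing state through get/set pairs, give the object behavior methods so its invariants live inside it ("tell, don't ask"). In Python:

        class Account:
            """Owns its balance; callers never get or set it directly."""

            def __init__(self, opening_balance=0):
                self._balance = opening_balance  # internal state, not part of the public API

            def withdraw(self, amount):
                # The "no overdrafts" rule is enforced inside the object, instead of
                # by every caller doing set_balance(get_balance() - amount).
                if amount > self._balance:
                    raise ValueError("insufficient funds")
                self._balance -= amount

            def deposit(self, amount):
                if amount <= 0:
                    raise ValueError("deposit must be positive")
                self._balance += amount

        acct = Account(100)
        acct.withdraw(30)  # tell the object what to do; don't pull data out and push it back

    Where a value genuinely must be read out (for display, say), a read-only property is the usual compromise; the usual argument is against reflexively pairing every field with a getter and a setter.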

  • Bad Code example

    - by Anantha Kumaran
    I am sure all of us have written some bad code at some point in our lives. I'm the kind of guy who loves to learn from other people's mistakes, so can you give some examples of bad code you have written and how you corrected it?

  • How to send an HTTP "Bad Request" response

    - by michael
    Hi, I am writing a C program which needs to send back an HTTP "Bad Request" response. This is what I write to the socket:

        HTTP/1.1 400 Bad Request\r\n
        Connection: close\r\n
        \r\n

    My question is: why does the browser keep spinning, as if it is still loading something? Am I missing a header in the HTTP response, or am I missing something else? Thank you.
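
    One hedged guess at the cause, sketched in Python rather than the poster's C: the response never signals where it ends. With Connection: close the browser waits for the server to actually close the socket, so if the program keeps the connection open and sends no Content-Length, the page keeps spinning. A minimal sketch of a response that terminates cleanly (the port number is my own placeholder):

        import socket

        RESPONSE = (
            b"HTTP/1.1 400 Bad Request\r\n"
            b"Content-Length: 0\r\n"   # tells the client the body is empty
            b"Connection: close\r\n"
            b"\r\n"
        )

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 8080))
        srv.listen(1)
        conn, _ = srv.accept()
        conn.recv(4096)         # read (and discard) the request
        conn.sendall(RESPONSE)
        conn.close()            # closing the socket marks the end of the response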

  • Why should GoTos be bad?

    - by lisn
    I'm using gotos, and a lot of them. C++, PHP or COBOL: I use them on nearly all occasions where everybody else would use functions or even classes. Yet my code is clear, maintainable, bug-free and fast. So why does everybody I meet tell me about how bad gotos are? Are there any facts that show that they are "bad"?

  • Bad disk performance on HP DL360 with Smart Array P400i RAID controller

    - by sarge
    I have an HP DL360 server with 4x 146GB SAS disks and a Smart Array P400i RAID controller with 256MB cache. The disks are in RAID 5 (3 disks + 1 hot spare). The server is running VMware ESX 3i. The disk write performance is really bad. Here are some numbers:

        ns1:~# hdparm -tT /dev/sda
        /dev/sda:
        Timing cached reads: 3364 MB in 2.00 seconds = 1685.69 MB/sec
        Timing buffered disk reads: 18 MB in 3.79 seconds = 4.75 MB/sec

        ns1:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
        125000+0 records in
        125000+0 records out
        1024000000 bytes (1.0 GB) copied, 282.307 s, 3.6 MB/s
        real 4m52.003s
        user 0m2.160s
        sys 3m10.796s

    Compared to another server, those numbers are terrible: a Dell R200 with 2x 500GB SATA disks and a PERC RAID controller (disks are mirrored).

        web4:~# hdparm -tT /dev/sda
        /dev/sda:
        Timing cached reads: 6584 MB in 2.00 seconds = 3297.79 MB/sec
        Timing buffered disk reads: 316 MB in 3.02 seconds = 104.79 MB/sec

        web4:~# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=125000 && sync"
        125000+0 records in
        125000+0 records out
        1024000000 bytes (1.0 GB) copied, 35.2919 s, 29.0 MB/s
        real 0m36.570s
        user 0m0.476s
        sys 0m32.298s

    The server isn't very loaded, and the VMware Infrastructure Client performance monitor shows 550KBps average read and 1208KBps average write for the last 30 minutes (highest write rate: 6.6MBps). This has been a problem from the start. Any ideas?

  • Problem with apache + ssl: length mismatch error and occasional bad request

    - by Ruben Garat
    We recently migrated a server from Slicehost to Linode, copying the config from one server to the other. Everything works perfectly except that we get occasional "Bad Request" errors. The error is not common: you can use the site all day and not see it, and the next day it will happen a lot. Apart from that, a lot of the time, even though the request works fine, we get some errors. Using ssldump we see:

        New TCP connection #1: myip(39831) <-> develserk(443)
        1 1  0.2316 (0.2316)  C>S  SSLv2 compatible client hello
          Version 3.1
          cipher suites
            Unknown value 0x39
            Unknown value 0x38
            Unknown value 0x35
            TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
            TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
            TLS_RSA_WITH_3DES_EDE_CBC_SHA
            SSL2_CK_3DES
            Unknown value 0x33
            Unknown value 0x32
            Unknown value 0x2f
            SSL2_CK_RC2
            TLS_RSA_WITH_RC4_128_SHA
            TLS_RSA_WITH_RC4_128_MD5
            SSL2_CK_RC4
            TLS_DHE_RSA_WITH_DES_CBC_SHA
            TLS_DHE_DSS_WITH_DES_CBC_SHA
            TLS_RSA_WITH_DES_CBC_SHA
            SSL2_CK_DES
            TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
            TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
            SSL2_CK_RC2_EXPORT40
            TLS_RSA_EXPORT_WITH_RC4_40_MD5
            SSL2_CK_RC4_EXPORT40
        1 2  0.2429 (0.0112)  S>C  Handshake
          ServerHello
            Version 3.1
            session_id[32]=
              9a 1e ae c4 5f df 99 47 97 40 42 71 97 eb b9 14
              96 2d 11 ac c0 00 15 67 4e f3 7d 65 4e c4 30 e9
            cipherSuite         Unknown value 0x39
            compressionMethod   NULL
        1 3  0.2429 (0.0000)  S>C  Handshake
          Certificate
        1 4  0.2429 (0.0000)  S>C  Handshake
          ServerKeyExchange
        1 5  0.2429 (0.0000)  S>C  Handshake
          ServerHelloDone
        1 6  0.4965 (0.2536)  C>S  Handshake
          ClientKeyExchange
        1 7  0.4965 (0.0000)  C>S  ChangeCipherSpec
        1 8  0.4965 (0.0000)  C>S  Handshake
        1 9  0.5040 (0.0075)  S>C  ChangeCipherSpec
        1 10 0.5040 (0.0000)  S>C  Handshake
        ERROR: Length mismatch

    From the Apache error.log:

        [Fri Aug 27 14:50:05 2010] [debug] ssl_engine_io.c(1892): OpenSSL: I/O error, 5 bytes expected to read on BIO#b80c1e70 [mem: b8100918]

    The server is Ubuntu 10.04.1, the Apache version is 2.2.14-5ubuntu8, and the OpenSSL version is 0.9.8k-7ubuntu8.

  • BAD ARCHIVE MIRROR using PXE BOOT method

    - by omkar
    I'm trying to automatically install Ubuntu on a client PC using the PXE boot method (I am following the steps given in this link: installation using PXE BOOT INSTALL). My objectives are:

    1. The server has a KICKSTART config file which contains the parameters for the OS installation and the files required for the installation.
    2. The client has to detect this configuration along with the setup files and complete the installation without any input from the user.

    On my server I have installed DHCP3-server, Apache2 and TFTP to help with the installation. I have nearly achieved my first objective: I am able to boot my client using the files stored on the server, but during the installation stage it asks me to "CHOOSE A MIRROR of UBUNTU ARCHIVE". I gave the server's IP address and the path on the server where the files are located, but then it gives me the error "BAD ARCHIVE MIRROR". So, is it possible that instead of downloading all the files from the internet and storing them on my disk, I can use the files which come with the Ubuntu CD? And how should I store these files on the disk, and in what format (should I zip them)? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS; how should this configuration file be supplied to the installation process?

  • Cisco 7206 error trying to copy running-config (Bad file number)

    - by jasondewitt
    I have a Cisco 7206 that terminates a bunch of PPPoA sessions for DSL users. Today I noticed that if I try to "show run", nothing happens: it doesn't show anything and just sends me right back to the command prompt. I decided I should probably try to back up the config, and that is where I'm stuck. Any time I try to copy the running-config to tftp, or to a PCMCIA card that I know is not full, I get the following error:

        %Error opening system:/running-config (Bad file number)

    I get this error when I try to do anything with the running config. I've been googling around, but I haven't found anything else that talks about this error. I've seen people say to erase the NVRAM and then try "copy run start", but I don't want to erase the NVRAM until I can pull off a copy of the running-config. I would try to reboot it, but the startup-config that is in NVRAM looks to be woefully out of date (good job, me!). Any ideas what might be wrong, or how I can get the running config off the router?

  • OpenVPN bad source address from client

    - by Bogdan
    I have a problem with OpenVPN. There are a lot of dropped-packet records in the OpenVPN log file on the server:

        Mon Oct 22 10:14:41 2012 us=726541 laptop/???:1194 MULTI: bad source address from client [192.168.1.107], packet dropped

    grep -E "^[a-z]" server.conf:

        port 1194
        proto udp
        dev tun
        ca data/ca.crt
        cert data/server.crt
        key data/server.key
        dh data/dh1024.pem
        tls-server
        tls-auth data/ta.key 0
        remote-cert-tls client
        cipher AES-256-CBC
        tun-mtu 1200
        server 10.10.10.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        client-to-client
        client-config-dir /etc/openvpn/ccd
        route 10.10.10.0 255.255.255.0
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        max-clients 5
        status /var/log/status-openvpn.log
        log /var/log/openvpn.log
        verb 4
        auth-user-pass-verify /etc/openvpn/verify.sh via-file
        tmp-dir /tmp
        script-security 2

    cat ccd/laptop:

        iroute 10.10.10.0 255.255.255.0

    cat client.conf:

        remote server ip 1194
        client
        dev tun
        ping 10
        comp-lzo
        proto udp
        tls-client
        tls-auth data/ta.key 1
        pkcs12 data/vpn.laptop.p12
        remote-cert-tls server
        #ns-cert-type server
        persist-key
        persist-tun
        cipher AES-256-CBC
        verb 3
        pull
        auth-user-pass /home/user/.openvpn/users.db

    According to "Jan Just Keijser - OpenVPN 2 Cookbook", the root of this problem is incorrect config options (see the screenshot). But, as you can see, my config has those options. Could you please help me solve this problem?

    @week: client log at verb level 6:

        Mon Oct 22 16:06:02 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
        Mon Oct 22 16:06:02 2012 /sbin/ifconfig tun0 10.10.10.3 pointopoint 10.10.10.5 mtu 1500
        Mon Oct 22 16:06:02 2012 /sbin/route add -net xxxx netmask 255.255.255.255 gw 192.168.1.1
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 Initialization Sequence Completed

    cat ccd/latop:

        iroute 10.10.10.0 255.255.255.0
        ifconfig-push 10.10.10.3 10.10.10.5

  • Unreadable corrupted NTFS partition - lost clusters reported

    - by Eduardo Martinez
    Hi, Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB Samsung SATA disk (connected via USB on an XP SP3 box). Unfortunately PM is unable to fix them. PM shows the drive as being NTFS, detects the used space ok and also the drive name, but the PM browser (right click on partition, browse...) won't show anything, as if the disk were empty. Windows Explorer is not even picking up the drive name and reports 'the file or directory is corrupted and unreadable'. The PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser, but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least). Also tried:

    - photorec-6.11.3: it actually started to extract files, but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options).
    - Find and Mount: the intellectual scan went well and the only partition on the disk was detected, but when I tried to mount it as p: I got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected'. Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this would keep the extracted files/folders structure intact?

    I'm starting to think the disk is pretty screwed and my chances to recover this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance.

  • How to test/debug bad network wiring?

    - by Jack Lloyd
    I recently bought a place already wired with Cat 5E (8 ports, leading to a central closet). However, when attempting to get link, nothing works. On closer examination, it was obvious that the ends in the closet were wired backwards (brown on pin 1, etc). The jacks that I've pulled out of the wall do look to be correctly done. However, testing with a network cable tester shows zero link between any of the jacks and any of the ports in the closet; I had expected to just see a 1/8, 2/7, ... 8/1 mismatch, but instead get nothing at all. The runs are accessible and look neat, though they take some bends that seem quite sharp and are in some cases much longer than they need to be (the person who put this in was a professional electrician, but I suspect this was the first time he ran network cabling). My best guess at this point is that he either bought bad cable or put so much tension on it that he snapped wires, though it seems surprising/unlikely that I wouldn't get at least one active wire on one of the 8 lines. So, my question: is there anything else I should try or test before I go ripping out everything and running new cable?

  • Fix bad superblock on logical partition

    - by Chris
    I was following http://www.howtoforge.com/linux_resi...xt3_partitions and when I reboot and run:

        root@Microknoppix:/home/knoppix# fsck -n /dev/sda7
        fsck from util-linux-ng 2.17.2
        e2fsck 1.41.12 (17-May-2010)
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sda7

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    So I ran e2fsck with all the backup block numbers you need (I forget exactly what tool I used to find where the superblocks are hidden): no dice. Then I ran testdisk and had it look for the superblock: no results. Anyone have any ideas? fdisk -l for reference:

        root@Microknoppix:/home/knoppix# fdisk -l

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x97646c29

        Device Boot   Start     End      Blocks     Id  System
        /dev/sda1         1      64      512000     83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/sda2        64   38912   312046593      f  W95 Ext'd (LBA)
        /dev/sda5        64     326     2104320     82  Linux swap / Solaris
        /dev/sda6  *    327    2938    20972544     83  Linux
        /dev/sda7      2938   38912   288968672+    83  Linux

    To be honest, it looks like I lost it... The next step, if that happens, is to dump the partition to an image file and hope I can find or write some software to parse through the data looking for known file headers, I think.

  • Win Svr 2003 DHCP Bad Addresses

    - by VinceM
    After looking at other posts I still can't figure this out, so I'll start at the beginning... I inherited this network and I'm not the most knowledgeable about networking. We have an AD DHCP server that is also our DNS server. We were having some VPN issues (on the same server) and my boss decided to disable Routing and Remote Access, which cleared the settings. We couldn't get it set back up correctly, so we rolled back to a backup drive they created a number of months ago. Since rolling back I've had Bad_Address listings in DHCP, and there are a number of duplicate records in the DNS forward lookup zones. We have fewer than 50 devices on the network, but I have over 90 bad addresses showing. The server is currently running, but we get IP address conflicts all the time on pretty much all the computers. I have had people do a release and renew, but it didn't help. I have also deleted and re-added the scope, to no avail. Any help or ideas would be greatly appreciated, and I apologize if I missed another post that has information to help. Thanks, Vince

  • USB drive errors after airport scan

    - by bobobobo
    Well, I just got a new PNY USB drive, and it passed through an airport scanner yesterday. I wrote to it and then tried to read from it today, and for some reason it gave me a corruption error! chkdsk reports errors like:

        Bad links in lost chain at cluster 1179 corrected.
        Lost chain cross-linked at cluster 1200. Orphan truncated.
        Lost chain cross-linked at cluster 1228. Orphan truncated.
        Lost chain cross-linked at cluster 1236. Orphan truncated.
        Lost chain cross-linked at cluster 1237. Orphan truncated.
        Lost chain cross-linked at cluster 1244. Orphan truncated.
        Lost chain cross-linked at cluster 1250. Orphan truncated.
        Lost chain cross-linked at cluster 1266. Orphan truncated.
        Lost chain cross-linked at cluster 1278. Orphan truncated.
        etc.

    What is this from? Could it possibly be from the airport scanner, or is it more likely a defective USB chip? How can I check the chip to see whether I should just return it (or throw it away) or continue to use it?

  • Nginx 502 Bad Gateway: It just won't stop

    - by David
    I have the same problem that most people seem to have with Nginx: 502 Bad Gateway errors. They are intermittent but typically happen more than once per session, which means my users are probably running into them nearly every time they use the app. I've tried adjusting fastcgi_buffers and fastcgi_buffer_size (in both directions) to no avail. I've tried various other things with the configuration file but nothing seems to work. Here's my config (note that I've stripped away most of the things I've tried, since they didn't work and I didn't want to bloat the file with a bunch of unrelated directives):

        server {
            root /usr/share/nginx/www/;
            index index.php;

            # Make site accessible from http://localhost/
            server_name localhost;

            # Pass PHP scripts to PHP-FPM
            location ~ \.php {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
            }

            # Lock the site
            location / {
                auth_basic "Administrator Login";
                auth_basic_user_file /usr/share/nginx/.htpasswd;
            }

            # Hide the password file
            location ~ /\. {
                deny all;
            }

            client_max_body_size 8M;
        }

    I'm running a small Rackspace cloud server, which should be plenty for handling an app with a small user base...

  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 on three hard drives and want to encrypt it with cryptsetup/LUKS. My tests showed that the encryption leads to a massive performance decrease that I cannot explain. The RAID5 is able to write 187 MB/s [1] without encryption. With encryption on top of it, write speed is down to about 40 MB/s. The RAID has a chunk size of 512K and a write intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 blocks of 512 bytes (i.e., 1MB). cryptsetup luksDump shows a payload offset of 4096 sectors (4096 x 512 bytes = 2MB, a multiple of both the 1MB alignment and the 512K chunk size), so I think the alignment is correct and fits the RAID chunk size. The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write to another drive (an SSD with LVM) that is also encrypted, I do get a write speed of 150 MB/s. top shows that the CPU usage is indeed very low; only the RAID5 xor takes 14%. I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see if the layering is the problem. The filesystem decreases the performance a little bit as expected, but by far not that much (write speed varies, but is around 100 MB/s). Summary:

    Disks + RAID5: good
    Disks + RAID5 + ext4: good
    Disks + RAID5 + encryption: bad
    SSD + encryption + LVM + ext4: good

    The read performance is not affected by the encryption: it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID?

    [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e., writing 10G in blocks of 100M).

    Edit: If this helps: I'm using Ubuntu 11.04 64bit with Linux 2.6.38.

    Edit 2: The performance stays approximately the same if I pass a block size of 4KB, 1MB or 10MB to dd.

  • Website latency and bad tcp packets

    - by Mistero Lupo
    I have multiple websites hosted on a Linode VPS and I'm having an issue with one of them: every page that I try to load has about 10 seconds of latency. The Apache logs are clean and the other websites on the same machine are running well. At first glance I thought it was a memory problem, since the VPS has only 512MB, but from the Linode dashboard CPU and disk I/O are normal. Anyway, here is the RAM status:

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:           487        463         23          0          2         55
        -/+ buffers/cache:        404         82
        Swap:          255        155        100

    Only 23MB free, but if it were a memory problem, why are the other websites running as usual? I took a live capture with Wireshark, and there are some duplicate SYN ACK packets just before the 10-second gap. I'm out of ideas and looking for some clues. Wireshark live capture screenshot. As you can see from the image, the gap is after the last bad TCP packet. Thank you in advance.

    UPDATE: I've checked the Apache2 logs at debug error level, and this is where something is happening:

        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'index.php'
        151.97.156.191 - - [14/Nov/2012:11:19:40 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6801578/subreq] (1) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] pass through /home/fmaisi/sites/www.fmaisi.it/public_html/index.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] strip per-dir prefix: /home/fmaisi/sites/www.fmaisi.it/public_html/wp-content/plugins/wp-filebase/wp-filebase_css.php -> wp-content/plugins/wp-filebase/wp-filebase_css.php
        151.97.156.191 - - [14/Nov/2012:11:19:54 +0100] [www.fmaisi.it/sid#7f32c625a220][rid#7f32c6537c78/initial] (3) [perdir /home/fmaisi/sites/www.fmaisi.it/public_html/] applying pattern '^index\.php$' to uri 'wp-content/plugins/wp-filebase/wp-filebase_css.php'

    As you can see, there is a gap of 14 seconds after the pass-through of index.php. Any suggestions? I'm out of ideas again.

  • Best practice for Python & Django constants

    - by Dylan Klomparens
    I have a Django model that relies on a tuple. I'm wondering what the best practice is for referring to constants within that tuple in my Django program. Here, for example, I'd like to specify default=0 as something that is more readable and does not require commenting. Any suggestions?

        Status = (
            (-1, 'Cancelled'),
            (0, 'Requires attention'),
            (1, 'Work in progress'),
            (2, 'Complete'),
        )

        class Task(models.Model):
            status = models.IntegerField(choices=Status, default=0)  # Status is 'Requires attention' (0) by default.

    EDIT: If possible I'd like to avoid using a number altogether. Somehow using the string 'Requires attention' instead would be more readable.
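
    A minimal sketch of one common approach (the constant names and STATUS_CHOICES are my own labels, not from the question): bind each status value to a name once, build the choices tuple from those names, and then default= reads as prose.

        from django.db import models

        CANCELLED = -1
        REQUIRES_ATTENTION = 0
        WORK_IN_PROGRESS = 1
        COMPLETE = 2

        STATUS_CHOICES = (
            (CANCELLED, 'Cancelled'),
            (REQUIRES_ATTENTION, 'Requires attention'),
            (WORK_IN_PROGRESS, 'Work in progress'),
            (COMPLETE, 'Complete'),
        )

        class Task(models.Model):
            # No comment needed: the default now explains itself.
            status = models.IntegerField(choices=STATUS_CHOICES,
                                         default=REQUIRES_ATTENTION)

    For display, task.get_status_display() returns the human-readable label for a choices field, which also speaks to the EDIT: code never needs to mention the string 'Requires attention' directly.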

  • Assert: a good practice or not?

    - by rkenshin
    Is it a good practice to use Assert on function parameters to enforce their validity? I was going through the source code of the Spring Framework and I noticed that they use Assert.notNull a lot. Here's an example:

        public static ParsedSql parseSqlStatement(String sql) {
            Assert.notNull(sql, "SQL must not be null");
        }

    Here's another one:

        public NamedParameterJdbcTemplate(DataSource dataSource) {
            Assert.notNull(dataSource, "The [dataSource] argument cannot be null.");
            this.classicJdbcTemplate = new JdbcTemplate(dataSource);
        }

        public NamedParameterJdbcTemplate(JdbcOperations classicJdbcTemplate) {
            Assert.notNull(classicJdbcTemplate, "JdbcTemplate must not be null");
            this.classicJdbcTemplate = classicJdbcTemplate;
        }

    Thank you

  • Best practice for storing global data in PHP?

    - by user281434
    Hi, I'm running a web application that allows a user to log in. The user can add/remove content in their 'library', which is displayed on a page called "library.php". Instead of querying the database for the contents of the user's library every time they load "library.php", I want to store it globally in PHP when the user logs in, so that the query is only run once. Is there a best practice for doing this, e.g. storing their library in an array in the session? Thanks for your time.

  • Ruby on Rails protect_from_forgery best practice

    - by randombits
    I'm currently working on building a RESTful web API with Ruby on Rails. I haven't bothered putting a proper authentication scheme into the API yet, as I'm first making sure that the tests and the basic behavior of the API work locally. Upon testing non-GET requests such as HTTP POST/DELETE/PUT, things choke because protect_from_forgery is on by default. How does this work in practice, since the idea in a RESTful API is that there is no state? The client does not have a session or a cookie associated with the server; each request is atomic and self-contained. The user will supply some credentials to ensure they are who they say they are, but other than that, does protect_from_forgery make sense at this point? Should it remain enabled?
