Search Results

Search found 10078 results on 404 pages for 'bad man'.


  • Why is 'using namespace std;' considered a bad practice in C++?

    - by Mana
    Okay, sorry for the simplistic question, but this has been bugging me ever since I finished high school C++ last year. I've been told by others on numerous occasions that my teacher was wrong in saying that we should have "using namespace std;" in our programs, and that std::cout and std::cin are more proper. However, they would always be vague as to why this is a bad practice. So, I'm asking now: Why is "using namespace std;" considered bad? Is it really that inefficient, or risk declaring ambiguous vars(variables that share the same name as a function in std namespace) that much? Or does this impact program performance noticeably as you get into writing larger applications? I'm sorry if this is something I should have googled to solve; I figured it would be nice to have this question on here regardless in case anyone else was wondering.
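
    A minimal sketch of the name-collision problem most answers point to (the global count here is purely illustrative; any identifier that also exists in namespace std behaves the same way):

        #include <algorithm>
        #include <iostream>

        int count = 0;  // a perfectly reasonable global in our own code

        int main() {
            // With "using namespace std;" in effect, the next line becomes
            // ambiguous: the compiler can no longer tell ::count (our int)
            // apart from std::count (the algorithm pulled in by <algorithm>).
            //
            //     using namespace std;
            //     count = 5;   // error: reference to 'count' is ambiguous
            //
            // Qualifying names explicitly keeps both usable side by side:
            count = 5;
            std::cout << count << std::endl;
            return 0;
        }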

    Read the article

  • LSI 9260-8i w/ 6 256gb SSDs - RAID 5, 6, 10, or bad idea overall?

    - by Michael Pearson
    We're provisioning a new production server for our reasonably busy website. Our choice of host have available a 6 drive configuration with a LSI 9260-8i card. My initial thought was to fill all six bays with SSDs (Intel 520 256gb) and set them up in RAID. Good, bad, or terrible idea? Can the card handle it? Should we be using RAID 5, 6 or 10? This would be the first time the provider have filled all six slots for this rackmount with SSDs, so they're a bit hesitant. I'm wondering if somebody else with this card has done something similar in a production environment. We do about 43gb of writes per day and currently use about 300gb of storage. The server acts as webserver, database, and image store for approx 1 million files. The plan is to underprovision the SSDs by approximately 10% to 20% to increase their overall lifespan & performance. The fallback option is 2x480gb SSDs in RAID 1 and another 2x1TB HDDs in RAID 1. The motivation behind this is that the server rental cost difference between 2xSSDs and 6xSSDs is minimal (compared to the overall cost of the rental). We do not have any special high-IOPs requirements. However, if the configuration is known to work, I don't see a good reason to not use it and not have to worry about having separate 'fast and small' and 'slow and large' disks.

    Read the article

  • Apache2: 400 Bad Request with Rewrite Rules, nothing in error log?

    - by neezer
    This is driving me nuts. Background: I'm using the built-in Apache2 & PHP that comes with Mac OS X 10.6. I have a vhost setup as follows:

        NameVirtualHost *:81

        <Directory "/Users/neezer/Sites/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <VirtualHost *:81>
            ServerName lobster.dev
            ServerAlias *.lobster.dev
            DocumentRoot /Users/neezer/Sites/lobster/www

            RewriteEngine On
            RewriteCond $1 !^(index\.php|resources|robots\.txt)
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L,QSA]

            LogLevel debug
            ErrorLog /private/var/log/apache2/lobster_error
        </VirtualHost>

    This is in /private/etc/apache2/users/neezer.conf. My code in the lobster project is PHP with the CodeIgniter framework. Trying to load http://lobster.dev:81/ gives me: 400 Bad Request. Normally, I'd go check my logs to see what caused it, yet my logs are empty! I looked in both /private/var/log/apache2/error_log and /private/var/log/apache2/lobster_error, and neither records ANY message relating to the 400. I have LogLevel set to debug in /private/etc/apache2/http.conf. Removing the rewrite rules gets rid of the error, but these same rules work on my MAMP host. I've double-checked and rewrite_module is loaded in my default Apache installation. My http.conf can be found here: https://gist.github.com/1057091 What gives? Let me know if you need any additional info. NOTE: I do NOT want to add the rewrite rules to .htaccess in the project directory (it's checked into a git repo and I don't want to touch it).
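
    One low-risk way to see what the rewrite engine is actually doing on Apache 2.2 (the version bundled with OS X 10.6) is its dedicated rewrite log; the log path below is only an example, and the directives go inside the same <VirtualHost *:81> block:

        # Apache 2.2 syntax; Apache 2.4 replaced these directives
        # with "LogLevel alert rewrite:trace5"
        RewriteLog      /private/var/log/apache2/lobster_rewrite.log
        RewriteLogLevel 5

    Every request then leaves a trace of each RewriteCond/RewriteRule decision, which usually pins down whether the 400 is produced before or inside the rewrite pass.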

    Read the article

  • Why is it bad to map network drives in Windows?

    - by Beeblebrox
    There has been some spirited discussion within our IT department about mapping network drives. In particular, it has been said that mapping network drives is A Bad Thing and that adding DFS paths or network shares to your (Windows Explorer/Libraries) Favourites is a far better solution. Why is this the case? Personally I find the convenience of z:\folder to be better than \\server\path\folder', particularly with cmd line and scripting (of course I'm not talking about hard-coded links, naturally!). I have tried searching for pros and cons of mapped network drives, but I haven't seen anything other than 'should the network go down, the drive will be unavailable'. But this is a limitation of any network-accessed storage... I have also been told that mapped network drives poll the network when the network resource is unavailable, however I haven't found more information on this. Wouldn't this still be an issue with other network access mechanisms (that is, mapped Favourites) whenever Windows tries to enumerate the file system (for example, when a file/folder picker dialog is opened)? -- Do network drives poll the network any more than a Windows Explorer library/favourite?

    Read the article

  • Could 0x800CCC0E in Outlook be caused by bad wireless connection?

    - by AndrejaKo
    I have a computer connected to a WiFi access point/router/modem. Sometimes, I get page not found errors and similar when opening a browser window, sometimes pings fail and it looks like the router's signal isn't very good. On the other hand, I get around 4 bars of signal strength in windows and graph looks good in Inssider. I also never get dropped connection to the router. My main problem is that I often get errors (such as 0x800CCC0E) in Outlook 2010 that after some searching appear to be connected to bad server connection. I'm using GMail over IMAP and all settings are correct. I didn't have similar errors on my previous router, but I'm not 100% sure that they appeared after switching to current one. It may have worked for some time without errors. There are also around 3000 messages on the server and the size of mailbox is around 12 GiB, which may contribute to the problems. On the other hand, there are at least 24 other networks in the 2.4 GHz range which I'm using and the number may have increased since I switched routers. Should I try solving this by getting a router with stronger signal?

    Read the article

  • Use cell formatting (e.g. "Good", "Bad", "Neutral") in formulas?

    - by ngm
    I am compiling a comparison of different pieces of software in an Excel spreadsheet. It is a big long list of features (the rows), with each column being one of the applications I'm evaluating. I've used styles to visually show how well each product meets each feature, as well as the importance of that feature, and now I'm wondering if there's a way I can use those annotations in a formula. The table is like:

                  | Product A | Product B | Product C
        Feature A | blah      | blah      | blah
        Feature B | blah      | blah      | blah
        Feature C | blah      | blah      | blah
        ....      | ....      | etc       |

    Where I've put 'blah' in the table above, in my actual spreadsheet is (potentially lengthy) descriptive text explaining something about this feature in the given product. I've then used the styles "Good", "Neutral" and "Bad" to visually annotate the description, to show how well each product meets that feature. For each feature I've also used the styles Accent4, 60% Accent4, 40% Accent4, etc, to annotate the importance of each feature. Now I'm wondering if somehow I can use those styles (the annotations) to tot up a total score for each product, e.g.:

        Score for feature A = valueof(60% Accent4) * valueof(Good)

    Is it possible at all?

    Read the article

  • Will the removal of NAT (with the use of IPv6) be bad for consumers? [closed]

    - by Jonathan.
    Possible Duplicate: How will IPv6 impact everyday users? (World IPv6 Day) As I understand, when we have finally made the switch to IPv6, not only will NAT be unnecessary but it is incompatible with IPv6? Will that mean that ISPs will have to serve multiple IP addresses per customer? Will they provide a range of addresses for each customer or as each device connects will they get an IP address that isn't necessarily near that of the other devices in their house? But overall will this be bad for the Internet users? As surely it will allow ISPs to see exactly how many devices are being used, and so allow them to charge for the use of additional IP addresses? And then if that happens, what happens when you try to connect an extra device to your network? Will it simply not get an IP address? In my home we have about 15-20 devices connected at once, but for places where there are hundreds of devices, it seems like the perfect opportunity for ISPs to charge more? I think I may have it completely wrong, so is there somewhere where there is an explanation of how things will work when IPv6 becomes the norm?

    Read the article

  • Enabling DNS for IPv6 infrastructure

    After successful automatic distribution of IPv6 address information via DHCPv6 in your local network it might be time to start offering some more services. Usually, we would use host names in order to communicate with other machines instead of their bare IPv6 addresses. During the following paragraphs we are going to enable our own DNS name server with IPv6 address resolving. This is the third article in a series on IPv6 configuration: Configure IPv6 on your Linux system DHCPv6: Provide IPv6 information in your local network Enabling DNS for IPv6 infrastructure Accessing your web server via IPv6 Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system. What's your name and your IPv6 address? $ sudo service bind9 status * bind9 is running If the service is not recognised, you have to install it first on your system. This is done very easy and quickly like so: $ sudo apt-get install bind9 Once again, there is no specialised package for IPv6. Just the regular application is good to go. But of course, it is necessary to enable IPv6 binding in the options. Let's fire up a text editor and modify the configuration file. $ sudo nano /etc/bind/named.conf.optionsacl iosnet {        127.0.0.1;        192.168.1.0/24;        ::1/128;        2001:db8:bad:a55::/64;};listen-on { iosnet; };listen-on-v6 { any; };allow-query { iosnet; };allow-transfer { iosnet; }; Most important directive is the listen-on-v6. This will enable your named to bind to your IPv6 addresses specified on your system. Easiest is to specify any as value, and named will bind to all available IPv6 addresses during start. More details and explanations are found in the man-pages of named.conf. Save the file and restart the named service. As usual, check your log files and correct your configuration in case of any logged error messages. Using the netstat command you can validate whether the service is running and to which IP and IPv6 addresses it is bound to, like so: $ sudo service bind9 restart $ sudo netstat -lnptu | grep "named\W*$"tcp        0      0 192.168.1.2:53        0.0.0.0:*               LISTEN      1734/named      tcp        0      0 127.0.0.1:53          0.0.0.0:*               LISTEN      1734/named      tcp6       0      0 :::53                 :::*                    LISTEN      1734/named      udp        0      0 192.168.1.2:53        0.0.0.0:*                           1734/named      udp        0      0 127.0.0.1:53          0.0.0.0:*                           1734/named      udp6       0      0 :::53                 :::*                                1734/named   Sweet! Okay, now it's about time to resolve host names and their assigned IPv6 addresses using our own DNS name server. $ host -t aaaa www.6bone.net 2001:db8:bad:a55::2Using domain server:Name: 2001:db8:bad:a55::2Address: 2001:db8:bad:a55::2#53Aliases: www.6bone.net is an alias for 6bone.net.6bone.net has IPv6 address 2001:5c0:1000:10::2 Alright, our newly configured BIND named is fully operational. Eventually, you might be more familiar with the dig command. Here is the same kind of IPv6 host name resolve but it will provide more details about that particular host as well as the domain in general. $ dig @2001:db8:bad:a55::2 www.6bone.net. AAAA More details on the Berkeley Internet Name Domain (bind) daemon and IPv6 are available in Chapter 22.1 of Peter Bieringer's HOWTO on IPv6. 
Setting up your own DNS zone Now, that we have an operational named in place, it's about time to implement and configure our own host names and IPv6 address resolving. The general approach is to create your own zone database below the bind folder and to add AAAA records for your hosts. In order to achieve this, we have to define the zone first in the configuration file named.conf.local. $ sudo nano /etc/bind/named.conf.local //// Do any local configuration here//zone "ios.mu" {        type master;        file "/etc/bind/zones/db.ios.mu";}; Here we specify the location of our zone database file. Next, we are going to create it and add our host names, our IP and our IPv6 addresses. $ sudo nano /etc/bind/zones/db.ios.mu $ORIGIN .$TTL 259200     ; 3 daysios.mu                  IN SOA  ios.mu. hostmaster.ios.mu. (                                2014031101 ; serial                                28800      ; refresh (8 hours)                                7200       ; retry (2 hours)                                604800     ; expire (1 week)                                86400      ; minimum (1 day)                                )                        NS      server.ios.mu.$ORIGIN ios.mu.server                  A       192.168.1.2server                  AAAA    2001:db8:bad:a55::2client1                 A       192.168.1.3client1                 AAAA    2001:db8:bad:a55::3client2                 A       192.168.1.4client2                 AAAA    2001:db8:bad:a55::4 With a couple of machines in place, it's time to reload that new configuration. Note: Each time you are going to change your zone databases you have to modify the serial information, too. Named loads the plain text zone definitions and converts them into an internal, indexed binary format to improve lookup performance. If you forget to change your serial then named will not use the new records from the text file but the indexed ones. Or you have to flush the index and force a reload of the zone. This can be done easily by either restarting the named: $ sudo service bind9 restart or by reloading the configuration file using the name server control utility - rndc: $ sudo rndc reconfig Check your log files for any error messages and whether the new zone database has been accepted. Next, we are going to resolve a host name trying to get its IPv6 address like so: $ host -t aaaa server.ios.mu. 2001:db8:bad:a55::2Using domain server:Name: 2001:db8:bad:a55::2Address: 2001:db8:bad:a55::2#53Aliases: server.ios.mu has IPv6 address 2001:db8:bad:a55::2 Looks good. Alternatively, you could have just ping'd the system as well using the ping6 command instead of the regular ping: $ ping6 serverPING server(2001:db8:bad:a55::2) 56 data bytes64 bytes from 2001:db8:bad:a55::2: icmp_seq=1 ttl=64 time=0.615 ms64 bytes from 2001:db8:bad:a55::2: icmp_seq=2 ttl=64 time=0.407 ms^C--- ios1 ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1001msrtt min/avg/max/mdev = 0.407/0.511/0.615/0.104 ms That also looks promising to me. How about your configuration? Next, it might be interesting to extend the range of available services on the network. One essential service would be to have web sites at hand.
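
    A quick way to validate the configuration and the new zone before reloading is the checker tools that ship with BIND (packaged as bind9utils on Debian/Ubuntu); the zone name and path below match the example above, and the serial in the output is simply whatever your zone file currently carries:

        $ named-checkconf /etc/bind/named.conf
        $ named-checkzone ios.mu /etc/bind/zones/db.ios.mu
        zone ios.mu/IN: loaded serial 2014031101
        OK

    named-checkconf stays silent when the configuration parses cleanly, so no output there is good news.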

    Read the article

  • Can't install git on Ubuntu 12.10

    - by Lucas Windir
    I'm following these instructions to install git on my laptop: http://git-scm.com/download/linux When I do: $ sudo apt-get install git-core This is what my terminal shows: Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? E: Some packages could not be authenticated lucas@lucas-Inspiron-N5050:~$ sudo apt-get install git-core Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libasprintf0c2:i386 libcroco3:i386 libgettextpo0:i386 libgomp1:i386 libunistring0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: git git-man liberror-perl Suggested packages: git-daemon-run git-daemon-sysvinit git-doc git-el git-arch git-cvs git-svn git-email git-gui gitk gitweb The following NEW packages will be installed: git git-core git-man liberror-perl 0 upgraded, 4 newly installed, 0 to remove and 0 not upgraded. Need to get 6,825 kB of archives. After this operation, 15.3 MB of additional disk space will be used. Do you want to continue [Y/n]? y WARNING: The following packages cannot be authenticated! liberror-perl git-man git git-core Install these packages without verification [y/N]? 
y Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main liberror-perl all 0.17-1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main git-man all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main git amd64 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Err httpq://py.archive.ubuntu.com/ubuntu/ quantal/main git-core all 1:1.7.10.4-1ubuntu1 Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch httpq://py.archive.ubuntu.com/ubuntu/pool/main/libe/liberror-perl/liberrorperl_0.17-1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch httpq://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-man_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch httpq://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git_1.7.10.4-1ubuntu1_amd64.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) Failed to fetch http://py.archive.ubuntu.com/ubuntu/pool/main/g/git/git-core_1.7.10.4-1ubuntu1_all.deb Something wicked happened resolving 'py.archive.ubuntu.com:http' (-5 - No address associated with hostname) E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? How could I install git on Ubuntu 12.10? I can't even do it from the Ubuntu Software Center. Thanks in advance!
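
    The repeated "httpq://py.archive.ubuntu.com" in that output looks like a mangled URL scheme rather than a mirror outage. A hedged sketch of what is worth checking first, assuming the stray "httpq" really does live in the APT sources (adjust the file name if your mirror entries sit under /etc/apt/sources.list.d/ instead):

        # show any line that carries the broken scheme
        grep -n "httpq" /etc/apt/sources.list

        # fix the scheme in place, refresh the package lists, then retry
        sudo sed -i 's|httpq://|http://|g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install git-core

    A successful apt-get update may also clear the "packages cannot be authenticated" warning, since the signed Release files become fetchable again.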

    Read the article

  • ...Which database background processes are responsible for what?... how did that work again? And what was the name of that one important Data Dictionary view? ...

    - by britta.wolf
    ...Wasn't there once a good Oracle poster you could quickly look things up on and get a good overview from? Many database administrators have missed that poster, which describes the architecture and processes as well as the Data Dictionary structure of the Oracle database! A handy little Flash application with extended content has therefore been developed - Oracle Database 11g: Interactive Quick Reference - which you can download here (simply click the "Download now" button; size of the zip file: 4.6 MB). It's brilliant, a must-have!!! :-)

    Read the article

  • Update: Super Hero

    While I was looking for a completely different article back in 2007, I came across my Super Hero & Super Villain rating... Well, it was time for an update: Your Super Hero results: You are Spider-Man Spider-Man 75% Supergirl 70% Green Lantern 70% Robin 57% The Flash 55% Hulk 50% Catwoman 50% Superman 45% Batman 40% Wonder Woman 40% Iron Man 40% You are intelligent, witty, a bit geeky and have great power and responsibility. Click here to take the Superhero Personality Test

    Read the article

  • Problem with script substitution when running script

    - by tucaz
    Hi! I'm new to Linux so this probably should be an easy fix, but I cannot see it. I have a script downloaded from official sources that is used to install additional tools for fsharp but it gives me a syntax error when running it. I tried to replace ( and ) by { and } but eventually it lead me to another error so I think this is not the problem since the script works for everybody. I read some articles that say that my bash version maybe is not the right one. I'm using Ubuntu 10.10 and here is the error: install-bonus.sh: 28: Syntax error: "(" unexpected (expecting "}") And this is line 27, 28 and 29: { declare -a DIRS=("${!3}") FILE=$2 And the full script: #! /bin/sh -e PREFIX=/usr BIN=$PREFIX/bin MAN=$PREFIX/share/man/man1/ die() { echo "$1" &2 echo "Installation aborted." &2 exit 1 } echo "This script will install additional material for F# including" echo "man pages, fsharpc and fsharpi scripts and Gtk# support for F#" echo "Interactive (root access needed)" echo "" # ------------------------------------------------------------------------------ # Utility function that searches specified directories for a specified file # and if the file is not found, it asks user to provide a directory RESULT="" searchpaths() { declare -a DIRS=("${!3}") FILE=$2 DIR=${DIRS[0]} for TRYDIR in ${DIRS[@]} do if [ -f $TRYDIR/$FILE ] then DIR=$TRYDIR fi done while [ ! -f $DIR/$FILE ] do echo "File '$FILE' was not found in any of ${DIRS[@]}. Please enter $1 installation directory:" read DIR done RESULT=$DIR } # ------------------------------------------------------------------------------ # Locate F# installation directory - this is needed, because we want to # add environment variable with it, generate 'fsharpc' and 'fsharpi' and also # copy load-gtk.fsx to that directory # ------------------------------------------------------------------------------ PATHS=( $1 /usr/lib/fsharp /usr/lib/shared/fsharp ) searchpaths "F# installation" FSharp.Core.dll PATHS[@] FSHARPDIR=$RESULT echo "Successfully found F# installation directory." # ------------------------------------------------------------------------------ # Check that we have everything we need # ------------------------------------------------------------------------------ [ $(id -u) -eq 0 ] || die "Please run the script as root." which mono /dev/null || die "mono not found in PATH." # ------------------------------------------------------------------------------ # Make sure that all additional assemblies are in GAC # ------------------------------------------------------------------------------ echo "Installing additional F# assemblies to the GAC" gacutil -i $FSHARPDIR/FSharp.Build.dll gacutil -i $FSHARPDIR/FSharp.Compiler.dll gacutil -i $FSHARPDIR/FSharp.Compiler.Interactive.Settings.dll gacutil -i $FSHARPDIR/FSharp.Compiler.Server.Shared.dll # ------------------------------------------------------------------------------ # Install additional files # ------------------------------------------------------------------------------ # Install man pages echo "Installing additional F# commands, scripts and man pages" mkdir -p $MAN cp *.1 $MAN # Export the FSHARP_COMPILER_BIN environment variable if [[ ! 
"$OSTYPE" =~ "darwin" ]]; then echo "export FSHARP_COMPILER_BIN=$FSHARPDIR" fsharp.sh mv fsharp.sh /etc/profile.d/ fi # Generate 'load-gtk.fsx' script for F# Interactive (ask user if we cannot find binaries) PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gtk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Gtk#" gtk-sharp.dll PATHS[@] GTKDIR=$RESULT echo "Successfully found Gtk# root directory." PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/glib-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Glib" glib-sharp.dll PATHS[@] GLIBDIR=$RESULT echo "Successfully found Glib# root directory." PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/atk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Atk#" atk-sharp.dll PATHS[@] ATKDIR=$RESULT echo "Successfully found Atk# root directory." PATHS=( /usr/lib/mono/gtk-sharp-2.0 /usr/lib/cli/gdk-sharp-2.0 /Library/Frameworks/Mono.framework/Versions/2.8/lib/mono/gtk-sharp-2.0 ) searchpaths "Gdk#" gdk-sharp.dll PATHS[@] GDKDIR=$RESULT echo "Successfully found Gdk# root directory." cp bonus/load-gtk.fsx load-gtk1.fsx sed "s,INSERTGTKPATH,$GTKDIR,g" load-gtk1.fsx load-gtk2.fsx sed "s,INSERTGDKPATH,$GDKDIR,g" load-gtk2.fsx load-gtk3.fsx sed "s,INSERTATKPATH,$ATKDIR,g" load-gtk3.fsx load-gtk4.fsx sed "s,INSERTGLIBPATH,$GLIBDIR,g" load-gtk4.fsx load-gtk.fsx rm load-gtk1.fsx rm load-gtk2.fsx rm load-gtk3.fsx rm load-gtk4.fsx mv load-gtk.fsx $FSHARPDIR/load-gtk.fsx # Generate 'fsharpc' and 'fsharpi' scripts (using the F# path) # 'fsharpi' automatically searches F# root directory (e.g. load-gtk) echo "#!/bin/sh" fsharpc echo "exec mono $FSHARPDIR/fsc.exe --resident \"\$@\"" fsharpc chmod 755 fsharpc echo "#!/bin/sh" fsharpi echo "exec mono $FSHARPDIR/fsi.exe -I:\"$FSHARPDIR\" \"\$@\"" fsharpi chmod 755 fsharpi mv fsharpc $BIN/fsharpc mv fsharpi $BIN/fsharpi Thanks a lot!

    Read the article

  • 13 solutions for better security in an Oracle database (Best Practices)

    - by C.Muetzlitz
    External influences such as legislation require IT to protect (our) data. But how do you actually check the security configured in an Oracle database? Is the required security implemented adequately - ideally in line with the actual protection needs? When did you last review the security of your Oracle database? And, better still, do you know the threats and the risks derived from them? These are all questions a responsible application owner should be able to answer on the spot - or do you see that differently? What is the best way to protect yourself against threats? The only correct answer is: through information and the knowledge derived from it. That information, and the knowledge hidden inside it, is probably spread across a great many sources, which makes it ever harder to acquire the right knowledge and apply it to protecting data and databases. Looking at the Oracle database, I recommend two essential areas you should act on and know about: knowing the best-practice solutions that should - and in part must - be implemented to guarantee good security, which I call "13 solutions for better security in an Oracle database (Best Practices)", and knowing what the real security state of an Oracle database looks like, which I call "Check Oracle DB Security". In this article I would like to introduce you to the basics of good Oracle database security and enable you to determine the security state of your database yourself.

    13 solutions for better security in an Oracle database (Best Practices):
    1. Activate password management: be aware that weak passwords are a serious threat, and enable sensible password management.
    2. Know the feature set of your current database version, including the features that are no longer supported. The "New Features and Upgrade Guide" should become mandatory reading.
    3. Implement a suitable security baseline. Oracle provides plenty of guidance here.
    4. Keep role and account management under control: controlled granting of privileges (least privilege), segregation of duties in account management, and an ongoing review of the role management and access concept.
    5. Implement a secure database link concept: especially in data integration scenarios, DB links are configured in the database again and again, and such links can open up uncontrolled access to remote databases. Track their use and put a secure DB link concept in place; Oracle provides the corresponding guidelines.
    6. Define protection policies for your applications, for example a proper application-owner and application-user setup.
    7. Implement the necessary protection for important data: know which data has to be protected and protect it appropriately.
    8. Control the resource consumption in your database.
    9. Implement a sensible segregation of duties in the database itself.
    10. Switch on sensible, legally compliant auditing: the law demands it, and Oracle specifies a minimum level of auditing.
    11. Implement processes that keep the database in good shape.
    12. Run regular health checks: Oracle ships a complete library for this with Enterprise Manager, for example.
    13. Define a working patch-management process: know the Critical Patch Updates and act when necessary.

    Check Oracle DB Security - or: whoever does not know the security state will not take any measures either. Checking the security state of an Oracle database is very important. Various applications available on the market can be used for this; a good choice would be, for example, Oracle Enterprise Manager (Cloud Control) with its Lifecycle Management, which determines the security state for you periodically. A manual review is also possible, but it requires deep knowledge. Even with that high knowledge requirement, understanding how to check an Oracle database for security manually is important. Stop relying on assumptions - take the security of your database seriously and get to know its real state. Knowledge of real states and knowledge of suitable concepts protect; only then can you decide which measures are actually necessary. Further information: the Oracle online documentation for the database, various articles in the knowledge base of Oracle Support, and the new book "Oracle Security in der Praxis. Vollständige Sicherheitsüberprüfung Ihrer Oracle Datenbank".

    Read the article

  • nginx 502 bad gateway - fastcgi not listening? (Debian 5)

    - by Sean
    I have experience with nginx but it's always been pre-installed for me (via VPS.net pre-configured image). I really like what it does for me, and now I'm trying to install it on my own server with apt-get. This is a fairly fresh Debian 5 install. I have few extra packages installed but they're all .deb's, no manual compiling or anything crazy going on. Apache is already installed but I disabled it. I did apt-get install nginx and that worked fine. Changed the config around a bit for my needs, although the same problem I'm about to describe happens even with the default config. It took me a while to figure out that the default debian package for nginx doesn't spawn fastcgi processes automatically. That's pretty lame, but I figured out how to do that with this script, which I found posted on many different web sites: #!/bin/bash ## ABSOLUTE path to the PHP binary PHPFCGI="/usr/bin/php5-cgi" ## tcp-port to bind on FCGIPORT="9000" ## IP to bind on FCGIADDR="127.0.0.1" ## number of PHP children to spawn PHP_FCGI_CHILDREN=10 ## number of request before php-process will be restarted PHP_FCGI_MAX_REQUESTS=1000 # allowed environment variables sperated by spaces ALLOWED_ENV="ORACLE_HOME PATH USER" ## if this script is run as root switch to the following user USERID=www-data ################## no config below this line if test x$PHP_FCGI_CHILDREN = x; then PHP_FCGI_CHILDREN=5 fi ALLOWED_ENV="$ALLOWED_ENV PHP_FCGI_CHILDREN" ALLOWED_ENV="$ALLOWED_ENV PHP_FCGI_MAX_REQUESTS" ALLOWED_ENV="$ALLOWED_ENV FCGI_WEB_SERVER_ADDRS" if test x$UID = x0; then EX="/bin/su -m -c \"$PHPFCGI -q -b $FCGIADDR:$FCGIPORT\" $USERID" else EX="$PHPFCGI -b $FCGIADDR:$FCGIPORT" fi echo $EX # copy the allowed environment variables E= for i in $ALLOWED_ENV; do E="$E $i=${!i}" done # clean environment and set up a new one nohup env - $E sh -c "$EX" &> /dev/null & When I do a "ps -A | grep php5-cgi", I see the 10 processes running, that should be ready to listen. But when I try to view a web page via nginx, I just get a 502 bad gateway error. After futzing around a bit, I tried telneting to 127.0.0.1 9000 (fastcgi is listening on port 9000, and nginx is configured to talk to that port), but it just immediately closes the connection. This makes me think the problem is with fastcgi, but I'm not sure what I can do to test it. It may just be closing the connection because it's not getting fed any data to process, but it closes immediately so that makes me think otherwise. So... any advice? I can't figure it out. It doesn't help that it's 1AM, but I'm going crazy here!
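
    Two quick checks that tend to narrow this kind of thing down, assuming the spawner script above is used with its defaults (php5-cgi bound to 127.0.0.1:9000 as www-data):

        # 1. Confirm that one of the php5-cgi processes really owns port 9000
        sudo netstat -lnp | grep ':9000'

        # 2. If nothing is listed, start the binary in the foreground to see
        #    why it fails to bind - the script above sends everything to
        #    /dev/null, which hides any startup error
        sudo -u www-data /usr/bin/php5-cgi -q -b 127.0.0.1:9000

    If the listener does show up under netstat but the 502 persists, the next place to look is the fastcgi_pass address in the nginx vhost, to make sure it really points at 127.0.0.1:9000.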

    Read the article

  • Live CD has black screen HP DV6

    - by Shaun Killingbeck
    Attempting to install/try ubuntu (11.10, 12.04) on my new laptop, using a liveCD (and tried USB). I get the purple screen (with the man/keyboard at the bottom) and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with login sound etc but the screen is off. I have tried as many different solutions as I could find including: using nomodeset, xforcevesa, i915.modeset=0, and also now i915.modeset=1 in boot options (separately): varying consequences, but either I end up at a blinking cursor with no prompt, a command line (startx fails: no screen found), or the original blank screen again Tried booting from VirtualBox - it crashes at the same place the screen would go blank when using a CD/USB tried 11.04: I don't have this problem BUT when trying to install, I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed their own separate partition for HP Tools...) Model: HP Pavilion DV6 6B08SA Processor: AMD Quad-Core A6-3410MX APU with Radeon HD 6545G2 Dual Graphics (1.6 GHZ 4 MB L2 cache ) Chipset: AMD RS880M Any help would be greatly appreciated. I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics card related, although I have no confirmation of that. Update: Tried the workarounds on https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen - set gfxpayload=text changed nothing, removing splash did nothing and setting vesafb.nonsense=1 did nothing either. I'd like to be able to collect some log information somehow, but I can't get to a command line from the liveCD. tried using the latest 12.04 beta, same issue tried nomodeset without splash or quiet. get the following (tail of) output before it freezes on that screen: * Starting configure network device security [OK] * Starting configure network device [OK] [ 25.720899] ieee80211 phy0: w1_ops_config: change monitor mode: false (implement) [ 25.720923] ieee80211 phy0: w1_ops_config: change power-save mode: false (implement) * Starting restore sound card(s') mixer state(s) [fail] [ 25.721849] ieee80211 phy0: w1_ops_bss_info_changed: qos enabled: false (implement) * Stopping save kernel messages [OK] * Starting bluetooth [OK] * PulseAudio configured for per-user sessions saned disabled; edit /etc/default/saned [ 25.988016] hci_cmd_timer: hci0 command tx timeout [ 26.207225] bad LUN (0:1) [ 26.223735] bad target number (1:0) [ 26.252111] bad target number (2:0) [ 26.272170] bad target number (3:0) [ 26.300154] bad target number (4:0) [ 26.328162] bad target number (5:0) [ 26.344180] bad target number (6:0) [ 26.368142] bad target number (7:0) * Checking battery state... [OK] * Stopping System V runlevel capability [OK] Does this give any indication of the problem? the false (implement) messages also reappear when I press the power button to ask it to shutdown, followed by a [fail] status for killing remaining processes.

    Read the article

  • Xna GS 4 Animation Sample bone transforms not copying correctly

    - by annonymously
    I have a person model that is animated and a suitcase model that is not. The person can pick up the suitcase and it should move to the location of the hand bone of the person model. Unfortunately the suitcase doesn't follow the animation correctly. it moves with the hand's animation but its position is under the ground and way too far to the right. I haven't scaled any of the models myself. Thank you. The source code (forgive the rough prototype code): Matrix[] tran = new Matrix[man.model.Bones.Count];// The absolute transforms from the animation player man.model.CopyAbsoluteBoneTransformsTo(tran); Vector3 suitcasePos, suitcaseScale, tempSuitcasePos = new Vector3();// Place holders for the Matrix Decompose Quaternion suitcaseRot = new Quaternion(); // The transformation of the right hand bone is decomposed tran[man.model.Bones["HPF_RightHand"].Index].Decompose(out suitcaseScale, out suitcaseRot, out tempSuitcasePos); suitcasePos = new Vector3(); suitcasePos.X = tempSuitcasePos.Z;// The axes are inverted for some reason suitcasePos.Y = -tempSuitcasePos.Y; suitcasePos.Z = -tempSuitcasePos.X; suitcase.Position = man.Position + suitcasePos;// The actual Suitcase properties suitcase.Rotation = man.Rotation + new Vector3(suitcaseRot.X, suitcaseRot.Y, suitcaseRot.Z); I am also copying the bone transforms from the animation player in the Person class like so: // The transformations from the AnimationPlayer Matrix[] skinTrans = new Matrix[model.Bones.Count]; skinTrans = player.GetBoneTransforms(); // copy each transformation to its corresponding bone for (int i = 0; i < skinTrans.Length; i++) { model.Bones[i].Transform = skinTrans[i]; }

    Read the article

  • Refreshing APEX reports automatically

    - by carstenczarski
    Refreshing a report on an application page at regular intervals is quite simple: since APEX 4.0 you don't even have to write JavaScript code for it any more; with an easy-to-use plugin from the APEX development team it is set up in next to no time. In this tip we go a little further: for a table that contains a column with the time of the last change, we want to highlight the most recently changed values so that they are easier to spot.

    Read the article

  • What makes static initialization functions good, bad, or otherwise?

    - by Richard Levasseur
    Suppose you had code like this:

        _READERS = None
        _WRITERS = None

        def Init(num_readers, reader_params, num_writers, writer_params, ...args...):
            ...logic...
            _READERS = new ReaderPool(num_readers, reader_params)
            _WRITERS = new WriterPool(num_writers, writer_params)
            ...more logic...

        class Doer:
            def __init__(...args...):
                ...
            def Read(self, ...args...):
                c = _READERS.get()
                try:
                    ...work with conn
                finally:
                    _READERS.put(c)
            def Writer(...):
                ...similar to Read()...

    To me, this is a bad pattern to follow, some cons:
    - Doers can be created without its preconditions being satisfied
    - The code isn't easily testable because ConnPool can't be directly mocked out.
    - Init has to be called right the first time. If its changed so it can be called multiple times, extra logic has to be added to check if variables are already defined, and lots of NULL values have to be passed around to skip re-initializing.
    - In the event of threads, the above becomes more complicated by adding locking
    - Globals aren't being used to communicate state (which isn't strictly bad, but a code smell)

    On the other hand, some pros:
    - its very convenient to call Init(5, "user/pass", 2, "user/pass")
    - It simple and "clean"

    Personally, I think the cons outweigh the pros, that is, testability and assured preconditions outweigh simplicity and convenience.
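
    For contrast, a minimal sketch of the dependency-injection shape those cons point toward; ReaderPool and WriterPool stand in for whatever the real pool classes are:

        class Doer:
            def __init__(self, readers, writers):
                # Preconditions hold by construction: a Doer cannot exist
                # without its pools, and tests can hand in fakes directly.
                self._readers = readers
                self._writers = writers

            def read(self, *args):
                c = self._readers.get()
                try:
                    ...  # work with the connection
                finally:
                    self._readers.put(c)

        # Wiring happens once, at the composition root, instead of via Init():
        # doer = Doer(ReaderPool(5, "user/pass"), WriterPool(2, "user/pass"))

    No module-level globals, no call-order requirement, and no locking question beyond what the pools themselves need.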

    Read the article

  • My program is spending most of its time in objc_msgSend. Does that mean that Objective-C has bad performance?

    - by Paperflyer
    Hello Stackoverflow. I have written an application that has a number of custom views and generally draws a lot of lines and bitmaps. Since performance is somewhat critical for the application, I spent a good amount of time optimizing draw performance. Now, activity monitor tells me that my application is usually using about 12% CPU and Instrument (the profiler) says that a whopping 10% CPU is spent in objc_msgSend (mostly in drawing related system calls). On the one hand, I am glad about this since it means that my drawing is about as fast as it gets and my optimizations where a huge success. On the other hand, it seems to imply that the only thing that is still using my CPU is the Objective-C overhead for messages (objc_msgSend). Hence, that if I had written the application in, say, Carbon, its performance would be drastically better. Now I am tempted to conclude that Objective-C is a language with bad performance, even though Cocoa seems to be awfully efficient since it can apparently draw faster than Objective-C can send messages. So, is Objective-C really a language with bad performance? What do you think about that?

    Read the article

  • Are background threads a bad idea? Why?

    - by Matt Grande
    So I've been told what I'm doing here is wrong, but I'm not sure why. I have a webpage that imports a CSV file with document numbers to perform an expensive operation on. I've put the expensive operation into a background thread to prevent it from blocking the application. Here's what I have in a nutshell.

        protected void ButtonUpload_Click(object sender, EventArgs e)
        {
            if (FileUploadCSV.HasFile)
            {
                string fileText;
                using (var sr = new StreamReader(FileUploadCSV.FileContent))
                {
                    fileText = sr.ReadToEnd();
                }
                var documentNumbers = fileText.Split(new[] {',', '\n', '\r'}, StringSplitOptions.RemoveEmptyEntries);
                ThreadStart threadStart = () => AnotherClass.ExpensiveOperation(documentNumbers);
                var thread = new Thread(threadStart) {IsBackground = true};
                thread.Start();
            }
        }

    (obviously with some error checking & messages for users thrown in) So my three-fold question is: a) Is this a bad idea? b) Why is this a bad idea? c) What would you do instead?

    Read the article

  • Bad idea to force creation of Mercurial remote heads (ie. branches)?

    - by Chad Johnson
    I am developing a centralized web application, and I have a centralized Mercurial repository. Locally I created a branch in my repository hg branch my_branch I then made some changes and committed. Then when I try to push, I get abort: push creates new remote branch 'my_branch'! (did you forget to merge? use push -f to force) I've just been using push -f. Is this bad? I WANT multiple branches in my central, remote repository, as I want to 1) back up my work and 2) allow other developers to develop with me on that branch. Is it bad or something to have branches in my remote repository or something? Should I not be doing push -f (and if not, what should I do?)? Why does Joel say this in his tutorial: Occasionally I've made a change in a branch, pushed, switched to another branch, and changes I had made in that branch I switch to were mysteriously reverted to a previous version from several commits ago. Maybe this is a symptom of forcing a push?

    Read the article

  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (that requires PHP - it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right - I seem to be getting a 400 Bad Request error whenever I try to browse to my website. I've been mostly following one guide, and I've done more or less everything it recommends, except for not setting up PHP-FPM to use a Unix Socket and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP, and MySQL. Anyways, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name - which contains my real name): /etc/nginx/nginx.conf user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 15; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/sites-enabled/subdomain.mydomain.net server { listen 80; # listen for IPv4 listen [::]:80; # listen for IPv6 server_name www.subdomain.mydomain.net subdomain.mydomain.net; access_log /srv/www/subdomain.mydomain.net/logs/access.log; error_log /srv/www/subdomain.mydomain.net/logs/error.log; location / { root /srv/www/subdomain.mydomain.net/public; index index.php; } location ~ \.php$ { try_files $uri =400; include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name; } } All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. Although, the only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users. What can I do to resolve my issue? Please let me know if I need to supply any additional details.
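
    One thing that stands out in the vhost above is that root only exists inside location /, so the try_files $uri =400; line in the PHP location is evaluated against nginx's built-in default root and fails for every request, which would explain a blanket 400. A hedged sketch of the usual shape (same paths as above; logging and other directives omitted for brevity):

        server {
            listen 80;
            listen [::]:80;
            server_name www.subdomain.mydomain.net subdomain.mydomain.net;

            # declared once at server level so every location inherits it
            root  /srv/www/subdomain.mydomain.net/public;
            index index.php;

            location ~ \.php$ {
                try_files $uri =404;   # a missing script reads more naturally as 404
                include fastcgi_params;
                fastcgi_pass  127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    If the 400 persists after that, comparing what $document_root$fastcgi_script_name expands to against the literal path used in the original config would be the next thing to verify.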

    Read the article
