Search Results

Search found 25520 results on 1021 pages for 'boost test'.

Page 319/1021

  • BI Applications 7.9.6.3 and EBS 12.1.3 Vision: Integrated Demo Environments

    - by Mike.Hallett(at)Oracle-BI&EPM
    If you need a combined BI-Applications + eBusiness Suite Applications demonstration environment, or one for proof of concept work for your customers, then these versions of images on Oracle Virtual Box are now available for partners to download and use. To get access to these images, Partners must be OPN members, specialised in OBI or BI-Apps.

    This is an integrated Demo/Test Drive/POC/Self Enablement environment comprising two separate images (in English) representing the entire Oracle Stack – Applications, Middleware, Database, Operating System and Virtual Machine. Minimum hardware requirements: 4GB RAM to run each image separately; 8GB RAM, a dual CPU and a 64-bit OS to run both images concurrently.

    BI Applications 7.9.6.3 – Linux based, running on Oracle Virtual Box and compatible with OVM. Image content: BI Application Analytics demo data extracted from EBS 12.1.3 Vision for Financials and HR, using EBS 12.1.3 Vision (image supplied); built integration to the EBS 12.1.3 Vision image (provided); fully functional BI Applications 7.9.6.3 software install and configuration; the image can be connected to load data from any other compatible source system; BI Apps demo data is based on OOTB EBS Vision 12.1.3; configured to run BI Apps data loads for all other modules of EBS 12.1.3 Vision; includes OBIEE Sample demonstration content; documented scripts for running presentations, demonstrations and Test Drives. Image size: 34GB zipped, 84GB unzipped. Minimum hardware: 4GB RAM.

    EBS Vision 12.1.3 – Linux based, running on Oracle Virtual Box and compatible with OVM. Image content: eBusiness Suite (EBS) Applications Vision 12.1.3, a standard Vision instance with all given setups, configurations and data; the source system for BI Apps 7.9.6.3. Image size: 76GB zipped, 300GB unzipped. Minimum hardware: 4GB RAM.

    Distribution: the Virtual Box images are posted on an external FTP server @ BI Applications 7.9.6.3 EBS12.1.3. To download, Partners need to request the current password to access the images. To request the current ftp.oracle.com password and the password required to unzip the images, please email Marek Winiarski. Support contact = Marek Winiarski, Oracle Partner Solution Consultant.

    Read the article

  • Connect to MS Sql 2008 on local VM

    - by Campo
    I have a test machine: Server 2008 with Hyper-V and MSSQL 2008 Enterprise. Let's call it MACHINE A. The VM is also Server 2008, with another MSSQL 2008 Enterprise instance. Call it VM B. I set up a DB on MACHINE A, then backed it up and restored it onto VM B following the "prepare database for mirroring" instructions on MSDN. I used to be able to connect to the VM B instance from the main test server (MACHINE A), but now I cannot for some reason. It cannot seem to find the instance at all, even when I browse network databases. I can ping the VM from any computer on the network and access its shares, so I know it is discoverable. Maybe it's just the end of a long day and I am missing something here.
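
    Since the host can ping the VM but not see the instance, one rough way to narrow things down is to test the two ports SQL Server discovery depends on. Below is a hypothetical Python sketch (the host name is a placeholder, and it assumes the default TCP port 1433 plus the SQL Server Browser service on UDP 1434 – a named instance may actually listen on a dynamic port that only the Browser can report):

      import socket

      HOST = "vm-b"  # placeholder: name or IP of VM B

      def tcp_reachable(host, port=1433, timeout=3):
          # Default instance / static port check.
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError:
              return False

      def browser_instances(host, timeout=3):
          # SQL Server Browser (UDP 1434) answers a CLNT_UCAST_EX (0x03) request
          # with the list of instances and their ports; no answer usually means
          # the service is stopped or UDP 1434 is being filtered.
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.settimeout(timeout)
          try:
              s.sendto(b"\x03", (host, 1434))
              data, _ = s.recvfrom(65535)
              return data[3:].decode("ascii", errors="replace")
          except OSError:
              return None
          finally:
              s.close()

      print("TCP 1433 reachable:", tcp_reachable(HOST))
      print("Browser response:", browser_instances(HOST))

    If the Browser request times out, that alone would explain why the instance no longer shows up when browsing network databases.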

    Read the article

  • Circular dependency and object creation when attempting DDD

    - by Matthew
    I have a domain where an Organization has People. Organization Entity public class Organization { private readonly List<Person> _people = new List<Person>(); public Person CreatePerson(string name) { var person = new Person(this, name); _people.Add(person); return person; } public IEnumerable<Person> People { get { return _people; } } } Person Entity public class Person { public Person(Organization organization, string name) { if (organization == null) { throw new ArgumentNullException("organization"); } Organization = organization; Name = name; } public Organization Organization { get; private set; } public string Name { get; private set; } } The rule for this relationship is that a Person must belong to exactly one Organization. The invariants I want to guarantee are: A person must have an organization – this is enforced via the Person's constructor. An organization must know of its people – this is why the Organization has a CreatePerson method. A person must belong to only one organization – this is why the organization's people list is not publicly mutable (ignoring the casting to List; maybe ToEnumerable can enforce that, not too concerned about it though). What I want out of this is that if a person is created, the organization knows about its creation. However, the problem with the model currently is that you are able to create a person without ever adding it to the organization's collection. Here's a failing unit test that describes my problem: [Test] public void AnOrganizationMustKnowOfItsPeople() { var organization = new Organization(); var person = new Person(organization, "Steve McQueen"); CollectionAssert.Contains(organization.People, person); } What is the most idiomatic way to enforce the invariants and the circular relationship?
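
    One common way to close that gap is to make the aggregate root's factory the only way a Person can come into existence (in C# that usually means an internal or protected constructor on Person). The sketch below shows the shape of that approach, translated to Python purely for brevity – names are illustrative, and the "constructor used only by the root" is enforced by convention here where C# would use internal:

      class Person:
          # In C# this constructor would be internal so only Organization can call it.
          def __init__(self, organization, name):
              if organization is None:
                  raise ValueError("organization is required")
              self.organization = organization
              self.name = name

      class Organization:
          def __init__(self):
              self._people = []

          def create_person(self, name):
              person = Person(self, name)   # the root is the single creation path...
              self._people.append(person)   # ...so it always records the new person
              return person

          @property
          def people(self):
              return tuple(self._people)    # read-only view for callers

      org = Organization()
      steve = org.create_person("Steve McQueen")
      assert steve in org.people            # the failing test's expectation now holds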

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck-typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, or writing test cases, or validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state--that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and anytime you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words. Not unlike Rails validators, but less human-language and more programming-language). You could think of this as ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test. Does such a library exist to declare and validate constraint/duck-checking on Python-objects? Is this an unreasonable tool to want? :) (Thanks!) Contrived example: rectangle = {'length': 5, 'width': 10} # We live in a fictional universe where multiplication is super expensive. # Therefore any time we multiply, we need to cache the results. def area(rect): if 'area' in rect: return rect['area'] rect['area'] = rect['length'] * rect['width'] return rect['area'] print area(rectangle) rectangle['length'] = 15 print area(rectangle) # compare expected vs. actual output! # imagine the same thing with object attributes rather than dictionary keys.
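
    As a rough illustration of the kind of declarative, always-on checking the question is after, a descriptor can re-run a constraint every time an attribute is assigned. This is only a minimal sketch (names are invented); fuller libraries in this space, such as Enthought Traits, go much further:

      class Checked:
          """Descriptor that re-validates a value on every assignment."""
          def __init__(self, check, message):
              self.check = check
              self.message = message

          def __set_name__(self, owner, name):
              self.storage = "_" + name

          def __get__(self, obj, objtype=None):
              if obj is None:
                  return self
              return getattr(obj, self.storage)

          def __set__(self, obj, value):
              if not self.check(value):
                  raise ValueError(f"{self.storage[1:]} {self.message}: {value!r}")
              setattr(obj, self.storage, value)

      class Rectangle:
          length = Checked(lambda v: isinstance(v, (int, float)) and v >= 0,
                           "must be a non-negative number")
          width = Checked(lambda v: isinstance(v, (int, float)) and v >= 0,
                          "must be a non-negative number")

          def __init__(self, length, width):
              self.length = length   # checks run here...
              self.width = width

      r = Rectangle(5, 10)
      r.length = 15                  # ...and on every later mutation
      # r.length = -1                # would raise ValueError immediately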

    Read the article

  • Deep recursion in WHM EasyApache software update causes out of memory

    - by Ernest
    I was trying to load some modules with EasyApache in a software update (WHM) because I need to install Magento ecommerce. I did the first EasyApache update; however, one module I needed was not loaded. I loaded it later, but whenever I check Tomcat 5.5 in the profile builder I get: -- Begin opt 'Tomcat' -- -- Begin dryrun test 'Checking for v5' -- -- End dryrun test 'Checking for v5' -- -- Begin step 'Checking jdk' -- Deep recursion on subroutine "Cpanel::CPAN::Digest::MD5::File::_dir" at /usr/local/cpanel/Cpanel/CPAN/Digest/MD5/File.pm line 107. Out of memory! Out of memory! *** glibc detected *** realloc(): invalid next size: 0x09741188 *** Line 107 in question in File.pm is the third one in this snippet: if(-d $full) { $hr->{ $short } = ''; _dir($full, $hr, $base, $type, $cc) or return; //line 107 } All my client sites are down and I don't know what to do to fix this.

    Read the article

  • Which Browser is the Best to Use When Running Your Laptop on Battery Power?

    - by Asian Angel
    Squeezing the maximum amount of usage time out of your laptop battery can be challenging at times…it all depends on the software you are using. One piece of software we are all likely to be using is a browser to keep up with our online lives… If your laptop is older, then getting the most out of your laptop’s aging battery is definitely a must. The good folks over at the 7 Tutorials blog have done a comparison test to see which browser is the gentlest on your laptop’s battery, and the results may surprise you. You can view the results by visiting the link below… Had better (or worse) luck with one of the browsers tested? Then make sure to share the results with your fellow readers in the comments! Test Comparison: Which Browser Will Make Your Laptop’s Battery Last Longer? [7 Tutorials]

    Read the article

  • Software center is broken

    - by Colin
    When I started installing the Humble Indie Bundle 5 games the software center stopped working and now I get this error. Packages cannot be installed or removed, click here to repair. Which fails and gives these results. installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 255502 files and directories currently installed.) Unpacking libqtcore4:i386 (from .../libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb (--unpack): conffile './etc/xdg/Trolltech.conf' is not in sync with other instances of the same package No apport report written because MaxReports is reached already Errors were encountered while processing: /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb Error in function: dpkg: dependency problems prevent configuration of libqtgui4:i386: libqtgui4:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqtgui4:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-sql:i386: libqt4-sql:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-sql:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of ia32-libs-multiarch:i386: ia32-libs-multiarch:i386 depends on libqt4-sql; however: Package libqt4-sql:i386 is not configured yet. ia32-libs-multiarch:i386 depends on libqtcore4; however: Package libqtcore4:i386 is not installed. ia32-libs-multiarch:i386 depends on libqtgui4; however: Package libqtgui4:i386 is not configured yet. dpkg: error processing ia32-libs-multiarch:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-declarative:i386: libqt4-declarative:i386 depends on libqt4-sql (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-sql:i386 is not configured yet. libqt4-declarative:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-declarative:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-declarative:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-svg:i386: libqt4-svg:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-svg:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-svg:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-network:i386: libqt4-network:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. 
dpkg: error processing libqt4-network:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-sql-mysql:i386: libqt4-sql-mysql:i386 depends on libqt4-sql (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-sql:i386 is not configured yet. libqt4-sql-mysql:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-sql-mysql:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-script:i386: libqt4-script:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-script:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-dbus:i386: libqt4-dbus:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-dbus:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-opengl:i386: libqt4-opengl:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-opengl:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-opengl:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqtwebkit4:i386: libqtwebkit4:i386 depends on libqt4-network (>= 4:4.8.0~); however: Package libqt4-network:i386 is not configured yet. libqtwebkit4:i386 depends on libqtcore4 (>= 4:4.8.0~); however: Package libqtcore4:i386 is not installed. libqtwebkit4:i386 depends on libqtgui4 (>= 4:4.8.0); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqtwebkit4:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-designer:i386: libqt4-designer:i386 depends on libqt4-script (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-script:i386 is not configured yet. libqt4-designer:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-designer:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-designer:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of lonesurvivor-bin:i386: lonesurvivor-bin:i386 depends on ia32-libs-multiarch; however: Package ia32-libs-multiarch:i386 is not configured yet. dpkg: error processing lonesurvivor-bin:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of lonesurvivor: lonesurvivor depends on lonesurvivor-bin (= 1.11d-0ubuntu5); however: Package lonesurvivor-bin is not installed. Package lonesurvivor-bin:i386 is not configured yet. dpkg: error processing lonesurvivor (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-scripttools:i386: libqt4-scripttools:i386 depends on libqt4-script (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-script:i386 is not configured yet. libqt4-scripttools:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. 
libqt4-scripttools:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-scripttools:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-qt3support:i386: libqt4-qt3support:i386 depends on libqt4-designer (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-designer:i386 is not configured yet. libqt4-qt3support:i386 depends on libqt4-network (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-network:i386 is not configured yet. libqt4-qt3support:i386 depends on libqt4-sql (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-sql:i386 is not configured yet. libqt4-qt3support:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. libqt4-qt3support:i386 depends on libqtgui4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtgui4:i386 is not configured yet. dpkg: error processing libqt4-qt3support:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-xml:i386: libqt4-xml:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-xml:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-test:i386: libqt4-test:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-test:i386 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libqt4-xmlpatterns:i386: libqt4-xmlpatterns:i386 depends on libqt4-network (= 4:4.8.1-0ubuntu4.1); however: Package libqt4-network:i386 is not configured yet. libqt4-xmlpatterns:i386 depends on libqtcore4 (= 4:4.8.1-0ubuntu4.1); however: Package libqtcore4:i386 is not installed. dpkg: error processing libqt4-xmlpatterns:i386 (--configure): dependency problems - leaving unconfigured I couldn't get the apt-get update or upgrade to work either so I shut off the repositories and updated / upgraded one at a time without any problems. But that didn't fix the Software Center. Help would be greatly appreciated. ADDED UPDATE I've tried to install aptitude using dpkg but can't. I have also tried sudo apt-get dist-upgrade sudo apt-get autoremove sudo dpkg --configure -a --force-all and the -f options. Though I believe this is where my problem is originating: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libqtcore4:i386 The following NEW packages will be installed: libqtcore4:i386 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B/2,061 kB of archives. After this operation, 9,041 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 255526 files and directories currently installed.) Unpacking libqtcore4:i386 (from .../libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb) ... dpkg: error processing /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb (--unpack): conffile './etc/xdg/Trolltech.conf' is not in sync with other instances of the same package Errors were encountered while processing: /var/cache/apt/archives/libqtcore4_4%3a4.8.1-0ubuntu4.1_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I hope this helps narrow it down some. 
SECOND UPDATE sudo apt-get --reinstall install software-center -f The following packages have unmet dependencies: ia32-libs-multiarch:i386 : Depends: libqtcore4:i386 but it is not going to be installed libqt4-dbus:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-declarative:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-designer:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-network:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-opengl:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-qt3support:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-script:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-scripttools:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-sql-mysql:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-svg:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-test:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xml:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqt4-xmlpatterns:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtgui4:i386 : Depends: libqtcore4:i386 (= 4:4.8.1-0ubuntu4.1) but it is not going to be installed libqtwebkit4:i386 : Depends: libqtcore4:i386 (>= 4:4.8.0~) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). sudo apt-get -f install doesn't work either. Complete output of terminal for step 5 can be found here, http://paste.ubuntu.com/1066192/

    Read the article

  • why i failed to build rtorrent?

    - by hugemeow
    i am not root, so i have to build rtorrent from source and hope to install it in my home directory, but failed, why? [mirror@hugemeow rtorrent]$ ls AUTHORS autogen.sh ChangeLog configure.ac COPYING doc INSTALL Makefile.am NEWS rak README scripts src test [mirror@hugemeow rtorrent]$ ./autogen.sh aclocal... aclocal:configure.ac:7: warning: macro `AM_PATH_CPPUNIT' not found in library autoheader... libtoolize... using libtoolize automake... configure.ac: installing `./install-sh' configure.ac: installing `./missing' src/Makefile.am: installing `./depcomp' autoconf... configure.ac:7: error: possibly undefined macro: AM_PATH_CPPUNIT If this token and others are legitimate, please use m4_pattern_allow. See the Autoconf documentation. though autoge failed, configure script is created. [mirror@hugemeow rtorrent]$ ls aclocal.m4 autogen.sh ChangeLog config.h.in configure COPYING doc install-sh Makefile.am missing rak scripts test AUTHORS autom4te.cache config.guess config.sub configure.ac depcomp INSTALL ltmain.sh Makefile.in NEWS README src run configure, and fail for syntax error near unexpected token `1.9.6', what's wrong? what i should do in order to build this rtorrent for my centos? [mirror@hugemeow rtorrent]$ ./configure checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... gawk checking whether make sets $(MAKE)... yes ./configure: line 2016: syntax error near unexpected token `1.9.6' ./configure: line 2016: `AM_PATH_CPPUNIT(1.9.6)' [mirror@hugemeow rtorrent]$ git branch * master [mirror@hugemeow rtorrent]$ git branch -a * master remotes/origin/HEAD -> origin/master remotes/origin/c++11 remotes/origin/master [mirror@hugemeow rtorrent]$ git tag 0.9.0 0.9.1 [mirror@hugemeow rtorrent]$ git clean -dfx Removing Makefile.in Removing aclocal.m4 Removing autom4te.cache/ Removing config.guess Removing config.h.in Removing config.log Removing config.sub Removing configure Removing depcomp Removing doc/Makefile.in Removing install-sh Removing ltmain.sh Removing missing Removing src/Makefile.in Removing src/core/Makefile.in Removing src/display/Makefile.in Removing src/input/Makefile.in Removing src/rpc/Makefile.in Removing src/ui/Makefile.in Removing src/utils/Makefile.in Removing test/Makefile.in Edit 1: details about libtool and libtools [mirror@hugemeow rtorrent]$ libtoolize --version libtoolize (GNU libtool) 1.5.22 Copyright (C) 2005 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. [mirror@hugemeow rtorrent]$ libtool --version ltmain.sh (GNU libtool) 1.5.22 (1.1220.2.365 2005/12/18 22:14:06) Copyright (C) 2005 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

    Read the article

  • Running psql in Linux Command Line

    - by Mr Shoubs
    I've just installed Postgres 9 and it is up and running without any issues. There is one thing, however, that is confusing me: If I type /usr/local/pgsql/bin/psql test then the postgres command line loads and I can use it as expected, however... If I cd /usr/local/pgsql/bin and then type psql test I get the following error: The program 'psql' is currently not installed. To run 'psql' please ask your administrator to install the package 'postgresql-client-common' Does anyone know why? (Please don't say "install postgresql-client-common", as this doesn't solve the problem.)

    Read the article

  • Installing Windows 7 on a BSOD Windows Vista (ASAP)

    - by anonymous
    On a previous question I posted, I asked for help fixing my Windows Vista box because it keeps going to a blue screen. No one seems to have the answer, so now I want to install Windows 7. I want to know if I can install 7 without having to reformat my hard drive and lose all my files. I've already confirmed the hardware is working, because I installed Ubuntu 9.10 on my external hard drive and it runs on my system fine. I tested the memory using Vista's memory test and Ubuntu's memory test. Here's the previous post: http://superuser.com/questions/125897/i-really-need-help-resolving-a-window-vista-bsod-blue-screen-crash-on-my-deskto

    Read the article

  • Is there a name for a testing method where you compare a set of very different designs?

    - by DVK
    "A/B testing" is defined as "a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates". The point here, of course, is to know which small single-variable changes are more optimal, with the goal of finding the local optimum. However, one can also envision a somewhat related but different scenario for testing the response rate of major re-designs: take a baseline control design, take one or more completely different designs, and run test samples on those redesigns to compare response rates. As a practical but contrived example, imagine testing a set of designs for the same website, one being minimalist "googly" design, one being cluttered "Amazony" design, and one being an artsy "designy" design (e.g. maximum use of design elements unlike Google but minimal simultaneously presented information, like Google but unlike Amazon) Is there an official name for such testing? It's definitely not A/B testing, since the main component of it (finding local optimum by testing single-variable small changes that can be attributed to response shift) is not present. This is more about trying to compare a set of local optimums, and compare to see which one works better as a global optimum. It's not a multivriable, A/B/N or any other such testing since you don't really have specific variables that can be attributed, just different designs.

    Read the article

  • Could not Upload file in network mapped drive using asp.net/vb.net

    - by Hasan
    I have tried several times to upload a file remotely to a mapped network drive, but it raises an exception: Could not find a part of the path 'X:\test\testing.wav'. I read through various internet, blog, and Microsoft help sites, but I still don't know what is wrong. Does anyone know what is causing this problem and how I can correct it? It works fine when I am uploading to a local drive as a test. It also works when I am running the code from the development server, but if I try with the published code, then it fails. :(

    Read the article

  • Adding and accessing custom sections in your C# App.config

    - by deadlydog
    So I recently thought I’d try using the app.config file to specify some data for my application (such as URLs) rather than hard-coding it into my app, which would require a recompile and redeploy of my app if one of our URLs changed.  By using the app.config it allows a user to just open up the .config file that sits beside their .exe file and edit the URLs right there and then re-run the app; no recompiling, no redeployment necessary. I spent a good few hours fighting with the app.config and looking at examples on Google before I was able to get things to work properly.  Most of the examples I found showed you how to pull a value from the app.config if you knew the specific key of the element you wanted to retrieve, but it took me a while to find a way to simply loop through all elements in a section, so I thought I would share my solutions here.   Simple and Easy The easiest way to use the app.config is to use the built-in types, such as NameValueSectionHandler.  For example, if we just wanted to add a list of database server urls to use in my app, we could do this in the app.config file like so: 1: <?xml version="1.0" encoding="utf-8" ?> 2: <configuration> 3: <configSections> 4: <section name="ConnectionManagerDatabaseServers" type="System.Configuration.NameValueSectionHandler" /> 5: </configSections> 6: <startup> 7: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> 8: </startup> 9: <ConnectionManagerDatabaseServers> 10: <add key="localhost" value="localhost" /> 11: <add key="Dev" value="Dev.MyDomain.local" /> 12: <add key="Test" value="Test.MyDomain.local" /> 13: <add key="Live" value="Prod.MyDomain.com" /> 14: </ConnectionManagerDatabaseServers> 15: </configuration>   And then you can access these values in code like so: 1: string devUrl = string.Empty; 2: var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection; 3: if (connectionManagerDatabaseServers != null) 4: { 5: devUrl = connectionManagerDatabaseServers["Dev"].ToString(); 6: }   Sometimes though you don’t know what the keys are going to be and you just want to grab all of the values in that ConnectionManagerDatabaseServers section.  In that case you can get them all like this: 1: // Grab the Environments listed in the App.config and add them to our list. 2: var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection; 3: if (connectionManagerDatabaseServers != null) 4: { 5: foreach (var serverKey in connectionManagerDatabaseServers.AllKeys) 6: { 7: string serverValue = connectionManagerDatabaseServers.GetValues(serverKey).FirstOrDefault(); 8: AddDatabaseServer(serverValue); 9: } 10: }   And here we just assume that the AddDatabaseServer() function adds the given string to some list of strings.  So this works great, but what about when we want to bring in more values than just a single string (or technically you could use this to bring in 2 strings, where the “key” could be the other string you want to store; for example, we could have stored the value of the Key as the user-friendly name of the url).   More Advanced (and more complicated) So if you want to bring in more information than a string or two per object in the section, then you can no longer simply use the built-in System.Configuration.NameValueSectionHandler type provided for us.  Instead you have to build your own types.  Here let’s assume that we again want to configure a set of addresses (i.e. 
urls), but we want to specify some extra info with them, such as the user-friendly name, if they require SSL or not, and a list of security groups that are allowed to save changes made to these endpoints. So let’s start by looking at the app.config: 1: <?xml version="1.0" encoding="utf-8" ?> 2: <configuration> 3: <configSections> 4: <section name="ConnectionManagerDataSection" type="ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection, ConnectionManagerUpdater" /> 5: </configSections> 6: <startup> 7: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> 8: </startup> 9: <ConnectionManagerDataSection> 10: <ConnectionManagerEndpoints> 11: <add name="Development" address="Dev.MyDomain.local" useSSL="false" /> 12: <add name="Test" address="Test.MyDomain.local" useSSL="true" /> 13: <add name="Live" address="Prod.MyDomain.com" useSSL="true" securityGroupsAllowedToSaveChanges="ConnectionManagerUsers" /> 14: </ConnectionManagerEndpoints> 15: </ConnectionManagerDataSection> 16: </configuration>   The first thing to notice here is that my section is now using the type “ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection” (the fully qualified path to my new class I created) “, ConnectionManagerUpdater” (the name of the assembly my new class is in).  Next, you will also notice an extra layer down in the <ConnectionManagerDataSection> which is the <ConnectionManagerEndpoints> element.  This is a new collection class that I created to hold each of the Endpoint entries that are defined.  Let’s look at that code now: 1: using System; 2: using System.Collections.Generic; 3: using System.Configuration; 4: using System.Linq; 5: using System.Text; 6: using System.Threading.Tasks; 7:  8: namespace ConnectionManagerUpdater.Data.Configuration 9: { 10: public class ConnectionManagerDataSection : ConfigurationSection 11: { 12: /// <summary> 13: /// The name of this section in the app.config. 
14: /// </summary> 15: public const string SectionName = "ConnectionManagerDataSection"; 16: 17: private const string EndpointCollectionName = "ConnectionManagerEndpoints"; 18:  19: [ConfigurationProperty(EndpointCollectionName)] 20: [ConfigurationCollection(typeof(ConnectionManagerEndpointsCollection), AddItemName = "add")] 21: public ConnectionManagerEndpointsCollection ConnectionManagerEndpoints { get { return (ConnectionManagerEndpointsCollection)base[EndpointCollectionName]; } } 22: } 23:  24: public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection 25: { 26: protected override ConfigurationElement CreateNewElement() 27: { 28: return new ConnectionManagerEndpointElement(); 29: } 30: 31: protected override object GetElementKey(ConfigurationElement element) 32: { 33: return ((ConnectionManagerEndpointElement)element).Name; 34: } 35: } 36: 37: public class ConnectionManagerEndpointElement : ConfigurationElement 38: { 39: [ConfigurationProperty("name", IsRequired = true)] 40: public string Name 41: { 42: get { return (string)this["name"]; } 43: set { this["name"] = value; } 44: } 45: 46: [ConfigurationProperty("address", IsRequired = true)] 47: public string Address 48: { 49: get { return (string)this["address"]; } 50: set { this["address"] = value; } 51: } 52: 53: [ConfigurationProperty("useSSL", IsRequired = false, DefaultValue = false)] 54: public bool UseSSL 55: { 56: get { return (bool)this["useSSL"]; } 57: set { this["useSSL"] = value; } 58: } 59: 60: [ConfigurationProperty("securityGroupsAllowedToSaveChanges", IsRequired = false)] 61: public string SecurityGroupsAllowedToSaveChanges 62: { 63: get { return (string)this["securityGroupsAllowedToSaveChanges"]; } 64: set { this["securityGroupsAllowedToSaveChanges"] = value; } 65: } 66: } 67: }   So here the first class we declare is the one that appears in the <configSections> element of the app.config.  It is ConnectionManagerDataSection and it inherits from the necessary System.Configuration.ConfigurationSection class.  This class just has one property (other than the expected section name), that basically just says I have a Collection property, which is actually a ConnectionManagerEndpointsCollection, which is the next class defined.  The ConnectionManagerEndpointsCollection class inherits from ConfigurationElementCollection and overrides the requied fields.  The first tells it what type of Element to create when adding a new one (in our case a ConnectionManagerEndpointElement), and a function specifying what property on our ConnectionManagerEndpointElement class is the unique key, which I’ve specified to be the Name field. The last class defined is the actual meat of our elements.  It inherits from ConfigurationElement and specifies the properties of the element (which can then be set in the xml of the App.config).  The “ConfigurationProperty” attribute on each of the properties tells what we expect the name of the property to correspond to in each element in the app.config, as well as some additional information such as if that property is required and what it’s default value should be. Finally, the code to actually access these values would look like this: 1: // Grab the Environments listed in the App.config and add them to our list. 
2: var connectionManagerDataSection = ConfigurationManager.GetSection(ConnectionManagerDataSection.SectionName) as ConnectionManagerDataSection; 3: if (connectionManagerDataSection != null) 4: { 5: foreach (ConnectionManagerEndpointElement endpointElement in connectionManagerDataSection.ConnectionManagerEndpoints) 6: { 7: var endpoint = new ConnectionManagerEndpoint() { Name = endpointElement.Name, ServerInfo = new ConnectionManagerServerInfo() { Address = endpointElement.Address, UseSSL = endpointElement.UseSSL, SecurityGroupsAllowedToSaveChanges = endpointElement.SecurityGroupsAllowedToSaveChanges.Split(',').Where(e => !string.IsNullOrWhiteSpace(e)).ToList() } }; 8: AddEndpoint(endpoint); 9: } 10: } This looks very similar to what we had before in the “simple” example.  The main points of interest are that we cast the section as ConnectionManagerDataSection (which is the class we defined for our section) and then iterate over the endpoints collection using the ConnectionManagerEndpoints property we created in the ConnectionManagerDataSection class.   Also, some other helpful resources around using app.config that I found (and for parts that I didn’t really explain in this article) are: How do you use sections in C# 4.0 app.config? (Stack Overflow) <== Shows how to use Section Groups as well, which is something that I did not cover here, but might be of interest to you. How to: Create Custom Configuration Sections Using Configuration Section (MSDN) ConfigurationSection Class (MSDN) ConfigurationCollectionAttribute Class (MSDN) ConfigurationElementCollection Class (MSDN)   I hope you find this helpful.  Feel free to leave a comment.  Happy Coding!

    Read the article

  • optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box. (Recently installed, nothing other than SQL Server 2005 on the server.) In one scenario, when directly accessing it (E:\folder\file.xxx) we get 45MBps throughput to a video file. If we access the same file on the same array, but through UNC path (\server\folder\file.xxx) we get about 23MBps throughput with the exact same test. Obviously the second test is going through more layers of the stack, but that's a major performance hit. What tuning should we be looking at for making the UNC path be closer in performance to the direct access case? Thanks, Kirk (corrected: it is CIFS not SMB, but generalized title to 'file share'.) (additional info: this happens during the read from a single file, not an issue across multiple connections. the file is on the local machine, but exposed via file share. so client and file server are both same Windows 2008 server.)
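
    For anyone wanting to reproduce the gap before tuning, a quick way is to time the same kind of sequential read through both paths. The sketch below is a rough, hypothetical harness (paths are placeholders); note that the OS file cache will flatter whichever path is read second, so use a different large file for each run or clear the cache between them:

      import time

      def read_throughput(path, chunk=1024 * 1024):
          """Sequentially read a file and return throughput in MB/s."""
          start = time.perf_counter()
          total = 0
          with open(path, "rb") as f:
              while True:
                  data = f.read(chunk)
                  if not data:
                      break
                  total += len(data)
          elapsed = time.perf_counter() - start
          return (total / (1024 * 1024)) / elapsed

      print("direct :", round(read_throughput(r"E:\folder\file.xxx")), "MB/s")
      print("via UNC:", round(read_throughput(r"\\server\folder\file.xxx")), "MB/s")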

    Read the article

  • Windows Azure ASP.NET MVC 2 Role with Silverlight

    - by GeekAgilistMercenary
    I was working through some scenarios recently with Azure and Silverlight.  I immediately decided a quick walk through for setting up a Silverlight Application running in an ASP.NET MVC 2 Application would be a cool project. This walk through I have Visual Studio 2010, Silverlight 4, and the Azure SDK all installed.  If you need to download any of those go get em? now. Launch Visual Studio 2010 and start a new project.  Click on the section for cloud templates as shown below. After you name the project, the dialog for what type of Windows Azure Cloud Service Role will display.  I selected ASP.NET MVC 2 Web Role, which adds the MvcWebRole1 Project to the Cloud Service Solution. Since I selected the ASP.NET MVC 2 Project type, it immediately prompts for a unit test project.  Because I just want to get everything running first, I will probably be unit testing the Silverlight and just using the MVC Project as a host for the Silverlight for now, and because I would prefer to just add the unit test project later, I am going to select no here. Once you've created the ASP.NET MVC 2 project to host the Silverlight, then create another new project.  Select the Silverlight section under the Installed Templates in the Add New Project dialog.  Then select Silverlight Application. The next dialog that comes up will inquire about using the existing ASP.NET MVC Application I just created, which I do want it to use that so I leave it checked.  The options section however I do not want to check RIA Web Services, do not want a test page added to the project, and I want Silverlight debugging enabled so I leave that checked.  Once those options are appropriately set, just click on OK and the Silverlight Project will be added to the overall solution. The next steps now are to get the Silverlight object appropriately embedded in the web page.  First open up the Site.Master file in the ASP.NET MVC 2 Project located under the Veiws/Shared/ location.  After you open the file review the content of the <header></header> section.  In that section add another <contentplaceholder></contentplaceholder> tag as shown in the code snippet below. <head runat="server"> <title> <asp:ContentPlaceHolder ID="TitleContent" runat="server" /> </title> <link href="../../Content/Site.css" rel="stylesheet" type="text/css" /> <asp:ContentPlaceHolder ID="HeaderContent" runat="server" /> </head> I usually put it toward the bottom of the header section.  It just seems the <title></title> should be on the top of the section and I like to keep it that way. Now open up the Index.aspx page under the ASP.NET MVC 2 Project located in the Views/Home/ directory.  When you open up that file add a <asp:Content><asp:Content> tag as shown in the next snippet. <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Home Page </asp:Content>   <asp:Content ID=headerContent ContentPlaceHolderID=HeaderContent runat=server>   </asp:Content>   <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2><%= Html.Encode(ViewData["Message"]) %></h2> <p> To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>. </p> </asp:Content> In that center tag, I am now going to add what is needed to appropriately embed the Silverlight object into the page.  The first thing I needed is a reference to the Silverlight.js file. <script type="text/javascript" src="Silverlight.js"></script> After that comes a bit of nitty gritty Javascript.  
I create another tag (and for those in the know, this is exactly like the generated code that is dumped into the *.html page generated with any Silverlight Project if you select to "add a test page that references the application".  The complete Javascript is below. function onSilverlightError(sender, args) { var appSource = ""; if (sender != null && sender != 0) { appSource = sender.getHost().Source; }   var errorType = args.ErrorType; var iErrorCode = args.ErrorCode;   if (errorType == "ImageError" || errorType == "MediaError") { return; }   var errMsg = "Unhandled Error in Silverlight Application " + appSource + "\n";   errMsg += "Code: " + iErrorCode + " \n"; errMsg += "Category: " + errorType + " \n"; errMsg += "Message: " + args.ErrorMessage + " \n";   if (errorType == "ParserError") { errMsg += "File: " + args.xamlFile + " \n"; errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } else if (errorType == "RuntimeError") { if (args.lineNumber != 0) { errMsg += "Line: " + args.lineNumber + " \n"; errMsg += "Position: " + args.charPosition + " \n"; } errMsg += "MethodName: " + args.methodName + " \n"; }   throw new Error(errMsg); } I literally, since it seems to work fine, just use what is populated in the automatically generated page.  After getting the appropriate Javascript into place I put the actual Silverlight Object Embed code into the HTML itself.  Just so I know the positioning and for final verification when running the application I insert the embed code just below the Index.aspx page message.  As shown below. <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2> <%= Html.Encode(ViewData["Message"]) %></h2> <p> To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website"> http://asp.net/mvc</a>. </p> <div id="silverlightControlHost"> <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"> <param name="source" value="ClientBin/CloudySilverlight.xap" /> <param name="onError" value="onSilverlightError" /> <param name="background" value="white" /> <param name="minRuntimeVersion" value="4.0.50401.0" /> <param name="autoUpgrade" value="true" /> <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=4.0.50401.0" style="text-decoration: none"> <img src="http://go.microsoft.com/fwlink/?LinkId=161376" alt="Get Microsoft Silverlight" style="border-style: none" /> </a> </object> <iframe id="_sl_historyFrame" style="visibility: hidden; height: 0px; width: 0px; border: 0px"></iframe> </div> </asp:Content> I then open up the Silverlight Project MainPage.xaml.  Just to make it visibly obvious that the Silverlight Application is running in the page, I added a button as shown below. <UserControl x:Class="CloudySilverlight.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400">   <Grid x:Name="LayoutRoot" Background="White"> <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="48,40,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" /> </Grid> </UserControl> Just for kicks, I added a message box that would popup, just to show executing functionality also. 
private void button1_Click(object sender, RoutedEventArgs e) { MessageBox.Show("It runs in the cloud!"); } I then executed the ASP.NET MVC 2 and could see the Silverlight Application in page.  With a quick click of the button, I got a message box.  Success! Now the next step is getting the ASP.NET MVC 2 Project and Silverlight published to the cloud.  As of Visual Studio 2010, Silverlight 4, and the latest Azure SDK, this is actually a ridiculously easy process. Navigate to the Azure Cloud Services web site. Once that is open go back in Visual Studio and right click on the cloud project and select publish. This will publish two files into a directory.  Copy that directory so you can easily paste it into the Azure Cloud Services web site.  You'll have to click on the application role in the cloud (I will have another blog entry soon about where, how, and best practices in the cloud). In the text boxes shown, select the application package file and the configuration file and place them in the appropriate text boxes.  This is the part were it comes in handy to have copied the directory path of the file location.  That way when you click on browser you can just paste that in, then hit enter.  The two files will be listed and you can select the appropriate file. Once that is done, name the service deployment.  Then click on publish.  After a minute or so you will see the following screen. Now click on run.  Once the MvcWebRole1 goes green (the little light symbol to the left of the status) click on the Web Site URL.  Be patient during this process too, it could take a minute or two.  The Silverlight application should again come up just like you ran it on your local machine. Once staging is up and running, click on the circular icon with two arrows to move staging to production.  Once you are done make sure the green light is again go for the production deploy, then click on the Web Site URL to verify the site is working.  At this point I had a successful development, staging, and production deployment. Thanks for reading, hope this was helpful.  I have more Windows Azure and other cloud related material coming, so stay tuned. Original Entry

    Read the article

  • ubuntu 12.04 server and tftp access violation issue on put command

    - by SMYERS
    I installed tftp as per this document: http://icesquare.com/wordpress/solvedtftp-error-code-2-access-violation/ I followed this to the letter 3 times, and every time I put a file I get: root@CiscoCFG:~# tftp localhost tftp put test Error code 2: Access violation tftp root@CiscoCFG:~# tftp localhost tftp put test Error code 2: Access violation If I touch the file name and chmod 777 the file, then doing a put works perfectly fine. My config is as follows: service tftp { protocol = udp port = 69 socket_type = dgram wait = yes user = nobody server = /usr/sbin/in.tftpd server_args = -s /svr/tftp disable = no } The directory /svr/tftp permissions are 777: drwxrwxrwx 3 nobody nobody 4096 Nov 14 10:32 svr This thing should have full permissions, as would anyone who wanted to write or read from that directory. I see nothing in the logs; I'm really stumped on this. If the file is already in the directory I can read it all day long. I just can't make NEW files, cannot put them, but I can do gets; I can only put to an existing file with permissions at 777. Thanks

    Read the article

  • Choosing the right language for the job

    - by Ampt
    I'm currently working for a company on an engineering team of about 5-6 people and have been given the job of heading up the redesign of an embedded system tester. We've decided on the general requirements and attributes that would be desirable in the system, and now I have to decide on a language to use for the system, or at the very least come up with a list of languages with pros and cons to present to the team. The general idea of the project is that we currently have a tester written in C++, which was never designed to be a tester, but instead has evolved to be such over the course of 3-4 years due to need. Writing tests for a new product requires modifying the 'framework' and writing code that is completely non-human-readable and unintuitive, due to the way the system was originally designed. Now, we've decided that the time to modify this tester for each new product that we want to test has become too high, and we want to partially re-write the system so that we can program the actual tests in a scripting language that would then use the modified C++ framework on the back end to test the actual systems. The C++ framework would be responsible for doing all the actual work, and the scripting language would just integrate with it to tell the framework what to do. Never having programmed in a scripting language (we program embedded systems), I've run into a wall where I have no experience with any of the languages that we could possibly use, but must somehow give pros and cons of each language so that we can choose the best one for the job. Currently my short list of possibilities includes: Python, TCL, Lua, Perl. My question is this: How can a person evaluate a language that he/she has never used before? What criteria are good indicators for a language's potential usability on a project? While helpful suggestions for my particular case are appreciated, I feel that this is a good skill to possess and would like to be able to apply it to many different projects if at all possible.

    Read the article

  • Benchmark an SSD

    - by Taylor Huston
    What is the best way to test/benchmark an SSD to make sure it's doing its job? I invested in an SSD and have my OS on it, and I want to make sure I am getting my money's worth. I have heard some people making claims along the lines of: "I've had my SSD for X months and my read/write speeds have dropped Y%". What is the best way to test for things like that (and what are good numbers to look for)? For reference, I have a Samsung 830 128GB.
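
    Dedicated tools such as CrystalDiskMark or AS SSD are the usual way to get comparable numbers over time. Purely as a rough, hypothetical sanity check (the file path is a placeholder; OS caching will inflate the read figure, so treat both numbers as ballpark only), something like this gives a quick sequential write/read estimate:

      import os
      import time

      TEST_FILE = r"C:\ssd_test.bin"   # put this on the SSD being tested
      SIZE_MB = 1024
      CHUNK = 1024 * 1024

      buf = os.urandom(CHUNK)
      start = time.perf_counter()
      with open(TEST_FILE, "wb") as f:
          for _ in range(SIZE_MB):
              f.write(buf)
          f.flush()
          os.fsync(f.fileno())          # make sure the data actually hit the disk
      write_mbps = SIZE_MB / (time.perf_counter() - start)

      start = time.perf_counter()
      with open(TEST_FILE, "rb") as f:
          while f.read(CHUNK):
              pass
      read_mbps = SIZE_MB / (time.perf_counter() - start)

      os.remove(TEST_FILE)
      print(f"sequential write ~{write_mbps:.0f} MB/s, sequential read ~{read_mbps:.0f} MB/s")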

    Read the article

  • SQL Azure Security: DoS

    - by Herve Roggero
    Since I decided to understand in more depth how SQL Azure works, I started to dig into its performance characteristics. So I decided to write an application that allows me to put SQL Azure to the test and compare results with a local SQL Server database. One of the options I added is the ability to issue the same command on multiple threads to get certain performance metrics. That's when I stumbled on an interesting security feature of SQL Azure: its Denial of Service (DoS) detection engine. What this security feature does is check the number of connections being established, and if the rate of connection is too high, SQL Azure blocks all communication from that machine. I am still trying to learn more about this specific feature, but it appears that going to the SQL Azure portal and testing the connection from the portal "resets" the feature and you are allowed to connect again... until you reach the login threshold. In the specific test I was performing, all the logins were successful. I haven't tried to log in with an invalid account or password... that will be for next time. On my LinkedIn group (SQL Server and SQL Azure Security: http://www.linkedin.com/groups?gid=2569994&trk=hb_side_g) Chip Andrews (www.sqlsecurity.com) pointed out that this feature in itself could present an internal threat. In theory, a rogue application could be issuing many login requests from a NATed network, which could potentially prevent any production system from connecting to SQL Azure within the same network. My initial response was that this could indeed be the case. However, while the TCP protocol contains the latest NATed IP address of a machine (which masks the origin of the machine making the SQL request), the TDS protocol itself contains the IP address of the machine making the initial request; so technically there would be a way for SQL Azure to block only the internal IP address making the rogue requests. So this warrants further investigation... stay tuned...
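
    For context, the kind of multi-threaded login test described above can be sketched in a few lines. This is a hypothetical harness, not the author's application: the connection string, driver name and thread count are placeholders, and pyodbc is assumed. Since opening connections in a tight loop is exactly what can trip the throttling being discussed, only point something like this at a test database you own:

      import threading
      import pyodbc

      # Placeholder connection string; the driver name is an assumption.
      CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                  "SERVER=tcp:yourserver.database.windows.net,1433;"
                  "DATABASE=testdb;UID=user@yourserver;PWD=secret;Encrypt=yes;")

      def attempt_login(slot, results):
          try:
              cn = pyodbc.connect(CONN_STR, timeout=10)   # timeout = login timeout in seconds
              cn.cursor().execute("SELECT 1").fetchone()
              cn.close()
              results[slot] = "ok"
          except pyodbc.Error as exc:
              results[slot] = f"failed: {exc}"

      results = {}
      threads = [threading.Thread(target=attempt_login, args=(i, results)) for i in range(20)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(results)   # a sudden burst of failures is what throttling looks like from the client side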

    Read the article

  • Make Network Manager use bridge for PPPoE instead of only working on ethernet?

    - by Azendale
    My ISP uses PPPoE on their DSL connections. I use Network Manager to connect to this using a bridged modem connected to eth0. Often I want to test networking things, so I set myself up a KVM machine with a tap interface. I can then connect these interfaces to virtual 'switches' by adding them to bridges. (I work for my ISP.) Sometimes I want to test cases where the PPPoE is connected more than once. For this, I would like to be able to add eth0 to my 'switch' (a bridge) so the VMs can have a 'bridged modem' connection to the internet. But I would like to still be able to run the PPPoE for my computer at the same time, which means that I need to get Network Manager to run PPPoE over the bridge (or eth0). The problem is that it considers eth0 (and the bridge) 'not managed' by Network Manager, so it refuses to use it. So, how can I have Network Manager dial PPPoE over a bridge?

    Read the article

  • Symlink - Permission Denied

    - by John Smith
    I'm facing an interesting problem with plenty of Permission Denied outputs when using symlinks. Linux: Slackware 13.1. Directory with symlink: root@Tower:/var/lib# ls -lah drwxr-xr-x 8 root root 0 2012-12-02 20:09 ./ drwxr-xr-x 15 root root 0 2012-12-01 21:06 ../ lrwxrwxrwx 1 ntop ntop 21 2012-12-02 20:09 ntop -> /mnt/user/media/ntop6/ Symlinked directory: root@Tower:/mnt/user/media# ls -lah drwxrwx--- 1 nobody users 1.4K 2012-12-02 19:28 ./ drwxrwx--- 1 nobody users 128 2012-11-18 16:06 ../ drwxrwxrwx 1 ntop ntop 320 2012-12-02 20:22 ntop6/ What I have done: I have used chown -h ntop:ntop on the ntop symlink in /var/lib. Just to be sure, I have applied chmod 777 to both directories. Permission denied actions: root@Tower:/var/lib# sudo -u ntop mkdir /var/lib/ntop/test mkdir: cannot create directory `/var/lib/ntop/test': Permission denied Any ideas?

    Read the article

  • CRM On Demand Performance Tips - Live Web Session on April 20, 2010

    - by Cheryl
    The CRM On Demand Customer Care specialists have another live Web session coming up - this one is about performance - issues, tips, and considerations. This is a part of their Web series, where they pick topics that they hear a lot of questions or concerns about from customers and run live (and free) 1-hour Web sessions about them. Here are the details for this event: Event Title: CRM On Demand Performance Brandon (Hank) Henrie will present some of the top CRM On Demand performance questions and issues that customers raise and some tips and tricks that you can use to avoid them. He will point out good resources that can help and tips for logging performance-related service requests, when all else fails. Date: April 20, 2010 Time: 10:00 am (UTC-07:00 Arizona) How to join: 1. Dial 1-866-682-4770 to access the conference line. 2. Enter the conference code - 6241996 and press # 3. Follow the instructions to record your name and press # 4. Enter the meeting passcode - 1212 and press # 5. Follow the instructions below to join the web portion of the conference. The Web Conference Go to the Oracle Web Conference site: https://strtc.oracle.com Prior to the event: Click the New User button then run the New User Test. (If you have difficulties installing the web conference software try downloading the conference software from the test status window and installing manually.) To join the event: 1. Enter the conference information In the Join Conference box: Conference ID: 6566623 Your Name 2. Click the Join Conference button. Watch for announcements of future sessions on different topics. And, let us know what you think!

    Read the article

  • Setting user's group and umask has no effect

    - by Andrew Vit
    I'm trying to allow my "deploy" user to have access to files created by www-data: I added "deploy" to the www-data group. I set umask to 002. When I run the following commands, I'm not seeing the result I expect: deploy@ubuntu-lucid-32-generic:/var/www$ groups www-data adm dialout cdrom plugdev lpadmin sambashare admin deploy sysadmin deploy@ubuntu-lucid-32-generic:/var/www$ newgrp www-data deploy@ubuntu-lucid-32-generic:/var/www$ umask 0002 deploy@ubuntu-lucid-32-generic:/var/www$ mkdir test deploy@ubuntu-lucid-32-generic:/var/www$ ls -la test total 0 drwxr-xr-x 1 deploy deploy 68 Nov 7 20:37 . drwxr-xr-x 1 deploy deploy 476 Nov 7 20:37 .. I see that: The folder doesn't belong to the www-data group. The folder permissions don't have group-write (775). Note that the /var/www directory is owned by the deploy user: drwxr-xr-x 1 deploy deploy 510 Nov 7 20:45 . How can I give www-data selective access to directories? Or, how to share the /var/www directory with my deploy user: I don't care who owns it, as long as I can write to it, and so can www-data. (Ideally I would set up a directory with SGID access for www-data.)
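
    For the SGID idea mentioned at the end, here is a small, hypothetical sketch of the mechanics (the path and group name are illustrative, and it needs sufficient privileges to chown): with the setgid bit on the directory, new files and subdirectories inherit its group, and a 002 umask keeps them group-writable, which is usually enough for deploy and www-data to share the tree.

      import grp
      import os
      import stat

      path = "/var/www/shared"                 # illustrative shared directory
      os.umask(0o002)                          # group-writable by default
      os.makedirs(path, exist_ok=True)
      os.chown(path, -1, grp.getgrnam("www-data").gr_gid)   # keep owner, set group
      os.chmod(path, 0o2775)                   # rwxrwsr-x: the 's' is the setgid bit

      mode = os.stat(path).st_mode
      print(oct(stat.S_IMODE(mode)))           # expect 0o2775
      print(bool(mode & stat.S_ISGID))         # True: new entries inherit the group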

    Read the article

  • How do I configure IIS so my Web.config is determined by URL?

    - by Scott Stafford
    I am running a test rig with IIS6 serving an ASP.NET (and Sharepoint) web site. We have several clients, and so we have custom root Web.config files for each client. For this test rig, I want to just serve straight from the Trunk of our source control. However, I'd like to be able to select different root Web.config files based on the URL (or port or whatever) I use to access the site, so I can just use one checkout of the source and run all the sites with their appropriate settings. Is this possible?

    Read the article
