Search Results

Search found 19191 results on 768 pages for 'core location'.


  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Will I get any genuine added performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA: 2-port SATA, PCI-Express (x1), RAID 0/1. It only supports two SATA drives. The purpose is to help support the eight internal hard drives (1TB each), a DVD drive and an external e-SATA-connected 2TB hard drive, by taking over two of the internal hard drives. My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green drive (e-SATA) and one DVD drive can already be supported by the Intel P55 and JMicron controllers on the ASUS motherboard: the Intel P55 controls six HDDs, configured as three x RAID 1, and the JMicron controls two HDDs as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port.

    Bigger picture details: I have an ASUS motherboard designed for the LGA1156 processor, and it includes the Intel P55 Express chipset and the JMicron. I am using the Intel Core i7-870 processor and have 8GB of DDR3 (1333) memory (four x 2GB Corsair DIMMs) - enough overall power. The power supply, a Corsair AX850, is more than sufficient for the system; it will never need the full 850 watts (future: second graphics card). The RAID card would provide hardware RAID 1 for two of the eight internal drives. It would either reduce the load on the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last-minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA or an equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives with a better RAID level, for example RAID 5.

    My issue and assumptions: I am trusting that the Intel P55 chipset can properly handle six drives, configured as three x RAID 1. I am assuming that the JMicron can handle, using its red SATA ports, one RAID 1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1); white port 2 is not used. The e-SATA connection runs from the JMicron through the motherboard to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives, or is it wise to take a little pressure off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID 0. RAID 5 would be nice. The CPU, an Intel Core i7-870, will provide the clout.

    Context for nine drives: I am using virtualisation with Windows 7 Ultimate, with bootable VMs. The operating system gets a mirror, loaded apps get a mirror, the current design data is kept in another mirror, and another mirror is for backups and/or VM territory. The external 2TB drive (via e-SATA) is the next layer of data security, and finally I use off-site data security. Thanks.


  • virtual host setup: can't access wordpress site without www

    - by two7s_clash
    I would like to access my site both with and without using the www. Currently it only works with. Leaving out the www just goes to a blank page. Also, wp-admin just loads a blank page too. I have set an A record for mysite.com and www.mysite.com, both pointing to my static Bitnami IP. I also have a subdomain mapped to another directory that is working just fine (conference.mysite.com and www.conference.mysite.com). I'm using a Bitnami stack on an AWS EC2 micro instance. Here is my httpd.conf: ServerRoot "/opt/bitnami/apache2" Listen 80 LoadModule authn_file_module modules/mod_authn_file.so blah blah blah.... LoadModule php5_module modules/libphp5.so <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> User daemon Group daemon </IfModule> </IfModule> ServerAdmin [email protected] ServerName localhost:80 DocumentRoot "/opt/bitnami/apps/wordpress1/htdocs/" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Allow from all </Directory> <Directory "/opt/bitnami/apps/wordpress1/htdocs/"> Options Indexes MultiViews +FollowSymLinks LanguagePriority en AllowOverride All Order allow,deny Allow from all </Directory> <IfModule dir_module> DirectoryIndex index.html index.php </IfModule> <FilesMatch "^\\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> ErrorLog "logs/error_log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \\"%r\\" %>s %b \\"%{Referer}i\\" \\"%{User-Agent}i\\"" combined LogFormat "%h %l %u %t \\"%r\\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \\"%r\\" %>s %b \\"%{Referer}i\\" \\"%{User-Agent}i\\" %I %O" combinedio </IfModule> CustomLog "logs/access_log" common </IfModule> <IfModule alias_module> ScriptAlias /cgi-bin/ "/opt/bitnami/apache2/cgi-bin/" </IfModule> <Directory "/opt/bitnami/apache2/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule> Include conf/extra/httpd-mpm.conf <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> AddType application/x-httpd-php .php .phtml LoadModule wsgi_module modules/mod_wsgi.so WSGIPythonHome /opt/bitnami/python ServerSignature Off ServerTokens Prod AddType application/x-httpd-php .php PHPIniDir "/opt/bitnami/php/etc" Include "/opt/bitnami/apps/phpmyadmin/conf/phpmyadmin.conf" ExtendedStatus On <Location /server-status> SetHandler server-status Order Deny,Allow Deny from all Allow from localhost </Location> Include "/opt/bitnami/apache2/conf/bitnami/httpd.conf" Include "/opt/bitnami/apps/virtualhost.conf" Here is my virtual hosts file: NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin xx DocumentRoot "/opt/bitnami/apps/wordpress1/htdocs" ServerName mbird.com ServerAlias www.mbird.com ErrorLog "logs/wordpress-error_log" CustomLog "logs/wordpress-access_log" common </VirtualHost> <Directory "/opt/bitnami/apps/wordpress1/htdocs"> Options Indexes MultiViews +FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> ### WordPress conference.mbird.com configuration ### <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/bitnami/apps/wordpress/htdocs" ServerName conference.mbird.com ServerAlias www.conference.mbird.com ErrorLog "logs/confwordpress-error_log" CustomLog "logs/confwordpress-access_log" common </VirtualHost> <Directory "/opt/bitnami/apps/wordpress/htdocs"> Options Indexes MultiViews 
+FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> ###
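
    Two quick checks may narrow this down (paths assume the standard Bitnami layout; adjust if yours differs):

        # print the parsed vhost map: mbird.com should appear as an
        # alias of the *:80 wordpress vhost
        sudo /opt/bitnami/apache2/bin/apachectl -S

        # ask apache for the bare host directly, bypassing DNS
        curl -I -H "Host: mbird.com" http://127.0.0.1/

    If the curl call returns a 200 with an empty body rather than an error, the vhost is matching and the blank page is more likely WordPress's siteurl/home setting (pointing at the www form) than the Apache config.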


  • How to install pip/easy_install on debian 6 for python3.2

    - by atomAltera
    I'm trying to install pip or setuptools for Python 3.2 on Debian 6.

    First case: apt-get install python3-pip...OK python3 easy_install.py webob Searching for webob Reading http://pypi.python.org/simple/webob/ Reading http://webob.org/ Reading http://pythonpaste.org/webob/ Best match: WebOb 1.2.2 Downloading http://pypi.python.org/packages/source/W/WebOb/WebOb-1.2.2.zip#md5=de0f371b46554709ce5b93c088a11cae Processing WebOb-1.2.2.zip Traceback (most recent call last): File "easy_install.py", line 5, in <module> main() File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1931, in main with_ei_usage(lambda: File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1912, in with_ei_usage return f() File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1935, in <lambda> distclass=DistributionWithoutHelpCommands, **kw File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command cmd_obj.run() File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 368, in run self.easy_install(spec, not self.no_deps) File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 608, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 638, in install_item dists = self.install_eggs(spec, download, tmpdir) File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 799, in install_eggs unpack_archive(dist_filename, tmpdir, self.unpack_progress) File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 67, in unpack_archive driver(filename, extract_dir, progress_filter) File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 154, in unpack_zipfile data = z.read(info.filename) File "/usr/local/lib/python3.2/zipfile.py", line 891, in read with self.open(name, "r", pwd) as fp: File "/usr/local/lib/python3.2/zipfile.py", line 980, in open close_fileobj=not self._filePassed) File "/usr/local/lib/python3.2/zipfile.py", line 489, in __init__ self._decompressor = zlib.decompressobj(-15) AttributeError: 'NoneType' object has no attribute 'decompressobj'

    Second case: from http://pypi.python.org/pypi/distribute#installation-instructions python3 distribute_setup.py Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.28.tar.gz Extracting in /tmp/tmpv6iei2 Traceback (most recent call last): File "distribute_setup.py", line 515, in <module> main(sys.argv[1:]) File "distribute_setup.py", line 511, in main _install(tarball, _build_install_args(argv)) File "distribute_setup.py", line 73, in _install tar = tarfile.open(tarball) File "/usr/local/lib/python3.2/tarfile.py", line 1746, in open raise ReadError("file could not be opened successfully") tarfile.ReadError: file could not be opened successfully

    Third case: from http://pypi.python.org/pypi/distribute#installation-instructions tar -xzvf distribute-0.6.28.tar.gz cd distribute-0.6.28 python3 setup.py install Before install bootstrap.
Scanning installed packages No setuptools distribution found running install running bdist_egg running egg_info writing distribute.egg-info/PKG-INFO writing top-level names to distribute.egg-info/top_level.txt writing dependency_links to distribute.egg-info/dependency_links.txt writing entry points to distribute.egg-info/entry_points.txt reading manifest file 'distribute.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' writing manifest file 'distribute.egg-info/SOURCES.txt' installing library code to build/bdist.linux-x86_64/egg running install_lib running build_py copying distribute.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO copying distribute.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO creating 'dist/distribute-0.6.28-py3.2.egg' and adding 'build/bdist.linux-x86_64/egg' to it Traceback (most recent call last): File "setup.py", line 220, in <module> scripts = scripts, File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup dist.run_commands() File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands self.run_command(cmd) File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command cmd_obj.run() File "build/src/setuptools/command/install.py", line 73, in run self.do_egg_install() File "build/src/setuptools/command/install.py", line 93, in do_egg_install self.run_command('bdist_egg') File "/usr/local/lib/python3.2/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command cmd_obj.run() File "build/src/setuptools/command/bdist_egg.py", line 241, in run dry_run=self.dry_run, mode=self.gen_header()) File "build/src/setuptools/command/bdist_egg.py", line 542, in make_zipfile z = zipfile.ZipFile(zip_filename, mode, compression=compression) File "/usr/local/lib/python3.2/zipfile.py", line 689, in __init__ "Compression requires the (missing) zlib module") RuntimeError: Compression requires the (missing) zlib module
    (zlib1g-dev is installed.) Help me, please!
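
    All three tracebacks end in the same place: the interpreter's zlib module is missing, which points at the custom-built Python 3.2 rather than at setuptools/distribute. A sketch of the usual fix, assuming Python was compiled from source before the zlib headers were present:

        # install the zlib headers, then rebuild Python so its
        # configure step can pick them up
        sudo apt-get install zlib1g-dev
        cd /path/to/Python-3.2.x    # your original source tree
        ./configure && make && sudo make install

        # verify before retrying pip/distribute
        python3 -c "import zlib; print(zlib.ZLIB_VERSION)"

    Installing zlib1g-dev after the fact (as noted at the end) is not enough on its own; the interpreter has to be rebuilt to pick it up.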


  • what is the best mid/high-end sound card for audio/music creation?

    - by Chris
    Hello, I have a computer shop myself, and I repair computers. But one of the things I really don't know (yet) is the performance of audio cards for music creation with MIDI. I have searched and searched and came up with some good reviews, but after browsing for a couple of hours I couldn't see the forest for the trees :-D. At one moment I thought the M-Audio Delta 1010LT would be a good PCIe card; later on I read that this card was released years ago (but that could be false information). Any personal experience would be great, but not necessary. I have looked at a few cards, and I hope someone can help me make a choice for a friend of mine. His budget is between $100 and $350; I know there are audio cards from $500 - $1850, but that is just too expensive. The following specs are crucial: ASIO, MIDI, mic in, minimum 5.1 (7.1 recommended). It's not for airplay, but just for composing music at home, using Ableton and a MIDI keyboard.

    1. M-Audio Delta 1010LT: 8 x 8 analog I/O; 2 mic preamps or line inputs; S/PDIF digital I/O (coaxial) with 2-channel PCM; SCMS copy protection control; digital I/O supports surround-encoded AC-3 and DTS pass-through; 1 x 1 MIDI I/O; directly drives up to 7.1 surround (bass management software included); software-controlled 36-bit internal DSP digital mixing/routing; +4dBu/-10dBV operation, individually switched in software; word clock I/O for sample-accurate device synchronization.

    2. RME HDSP 9632 (specs translated from German): stereo analog input and output, balanced*, 24-bit/192kHz, > 110 dB SNR; optional expansion boards with 4 balanced inputs and outputs each; all analog I/Os fully 192kHz-capable, so no reduction in channel count; 1 x ADAT digital in/out, 96kHz-capable (S/MUX); 1 x S/PDIF digital in/out, 192kHz-capable; 1 x breakout cable for coaxial S/PDIF operation*; so up to 16 inputs and outputs usable simultaneously; 1 x stereo headphone output, parallel to the analog output but with its own level control; 1 x MIDI I/O for 16 channels of hi-speed MIDI via breakout cable; DIGICheck, RME's metering and analysis tool with spectral analyser, professional 2/8/16-channel level meters, vector audio scope and various other analysis functions; HDSP Meter Bridge: freely scalable level meters with peak and RMS calculation in hardware; TotalMix: 512-channel mixer with 40-bit internal resolution.

    3. EMU 1212M (1212 M) PCIe (specs translated from Dutch): top-quality 24-bit/192kHz converters; hardware-driven effects; DSP zero-latency hardware mixing and monitoring; analog and digital I/O plus MIDI; E-MU Production Tools software bundle - Cakewalk SONAR, Steinberg Cubase LE, Ableton Live E-MU Edition. EMU 1212M PCI-e inputs/outputs: 2 balanced jack inputs; 2 balanced jack outputs; 24-bit/192kHz ADAT I/O; 24-bit/192kHz coaxial S/PDIF I/O switchable to AES/EBU; MIDI I/O.

    4.
M-Audio Audiophile 192: - Up to 24-bit/192kHz audio - 2 balanced analog inputs (1/4” TRS) - 2 balanced analog outputs (1/4” TRS) - S/PDIF digital I/O (coaxial RCA connectors) with 2-channel PCM - SCMS copy protection control - Digital I/O supports surround-encoded AC-3 and DTS pass-through - Direct hardware input monitoring via separate balanced 1/4” TRS monitor outputs - Software routing of inputs and outputs - Digital I/O can be routed to/from external effects - 16-channel MIDI I/O - ASIO, WDM, GSIF 2 and Core Audio driver support for compatibility with most applications - 64-bit driver support for Windows - PCI 2.2 compatibility - Apple G5 compatible (with some incompatibility exceptions) - Includes Ableton Live Lite music production software, so you can make music right away - Works with other Delta cards. Technical Specifications: - Compatibility - ASIO - WDM - GSIF 2 - Core Audio


  • Windows 7 sometimes boots in VGA mode

    - by TuxRug
    I have an Asus G50VT-x5 laptop with nVidia GeForce 9800M-GS graphics. Most of the time Windows boots normally, but about 20% of the time (rough estimate) it will boot with the fallback VGA driver, maxing out at 800x600 with no Aero. I've checked the system logs and there is nothing indicating an error loading the nVidia driver. The logs even state that the Nvidia Display Driver service started successfully, even though it has booted in safe graphics mode. This has been happening for a while, but it's happening a little more often now than it was before. Since the first time my system exhibited this behavior, I have updated my graphics driver a handful of times. I used System Information for Windows to check for problems there, but the only thing that stood out was the following: Core Temperature 4486449 °C (8075639 °F) Shaders Temperature 1171513530 °C (2108724330 °F) I know this reading is incorrect, because my laptop is nowhere near the surface of the sun and my desk has not burst into flames. When it's operating normally, I get a sane reading like [Core Temperature 58 °C (136 °F)] with no Shaders Temperature listed. All I have to do to resolve the issue is reboot. I have seen no stability issues with the graphics or anything else. A long time ago, I had an issue with this computer where my framerate would suddenly drop during a 3D game from 40fps to <1fps; after looking at the temperature readout immediately after quitting a game, I removed the bottom panel and blew the dust out of the vent and heatsink. Since then I have had no drops in framerate in any situation. I have uploaded a zip containing the SIW reports for when the problem is occurring and when the computer is operating normally. I don't have a paid account, so it can only be downloaded 10 times; please only download the reports if you think you can use them. If you try to download the reports and they are no longer available, please comment and I will re-upload them. If you want to look at the files, they are on Rapidshare.

    EDIT: It happened again, and I looked a little deeper into the System logs. When this happens, there are a lot of errors about other device drivers being unable to start. All of these errors are for PnP drivers. Also, my USB keyboard and mouse take a few moments before they actually start working, although this sometimes happens on a normal boot as well. I am quite sure this is related, so I am adding the pnp tag. Also, CHKDSK will not run on boot. Even if a check is scheduled or a volume is manually set as dirty, CHKDSK will be skipped entirely, not even leaving an entry in the System logs. I tried running CHKNTFS /D, which did not work. I then manually changed my HKLM\System\CurrentControlSet\Control\Session Manager BootExecute value to the default listed on Microsoft's website. That did not work either. I ended up booting to repair mode and running CHKDSK there, which found a number of minor inconsistencies on my system drive, but none on my data drive. I have no idea if this is related. Some more information for those who don't download my SIW report file: antivirus and firewall are ESET Smart Security, and I have three different virtualization programs installed: VMware Player, Windows Virtual PC, and VirtualBox. The network adapters for these show up in the log of failed device starts.

    EDIT 2: I tried running sfc /scannow, which reported that it found corrupted files that could not be fixed. The CBS log is extremely cryptic.
I tried booting to my install disk, launching repair mode, and doing an offline sfc from there, which produced the same result.


  • Hide subdomain AND subdirectory using mod_rewrite?

    - by Jeremy
    I am trying to hide a subdomain and subdirectory from users. I know it may be easier to use a virtual host, but will that not change direct links pointing at our site? The site currently resides at http://mail.ctrc.sk.ca/cms/ and I want www.ctrc.sk.ca and ctrc.sk.ca to access this folder but still display www.ctrc.sk.ca, if that makes any sense. Here is what our current .htaccess file looks like; we are using Joomla, so there are already a few rules set up. Help is appreciated. # Helicon ISAPI_Rewrite configuration file # Version 3.1.0.78 ## # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $ # @package Joomla # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved. # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL # Joomla! is Free Software ## ##################################################### # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. # ##################################################### ## Can be commented out if causes errors, see notes above. #Options +FollowSymLinks # # mod_rewrite in use RewriteEngine On ########## Begin - Rewrite rules to block out some common exploits ## If you experience problems on your site block out the operations listed below ## This attempts to block the most common type of exploit `attempts` to Joomla! # ## Deny access to extension xml files (uncomment out to activate) #<Files ~ "\.xml$"> #Order allow,deny #Deny from all #Satisfy all #</Files> ## End of deny access to extension xml files RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR] # Block out any script trying to base64_encode crap to send via URL RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR] # Block out any script that includes a <script> tag in URL RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Send all blocked request to homepage with 403 Forbidden error! RewriteRule ^(.*)$ index.php [F,L] # ########## End - Rewrite rules to block out some common exploits # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root) #RewriteBase / ########## Begin - Joomla! core SEF Section # RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !^/index.php RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC] RewriteRule (.*) index.php RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L] # ########## End - Joomla! core SEF Section

    EDIT: Yes, mail.ctrc.sk.ca/cms/ is the root directory. Currently the DNS redirects from ctrc.sk.ca and www.ctrc.sk.ca to mail.ctrc.sk.ca/cms. However, when it redirects, the user still sees the mail.ctrc.sk.ca/cms/ URL, and I want them to only see www.ctrc.sk.ca.
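
    For the stated goal (serve the /cms content for (www.)ctrc.sk.ca without exposing /cms in the URL), an internal rewrite keyed on the host header is one option. A hedged sketch in the same ISAPI_Rewrite/mod_rewrite dialect the file already uses, to be placed before the Joomla SEF section:

        # map requests arriving for (www.)ctrc.sk.ca into /cms internally,
        # without issuing a redirect, so the browser URL never changes
        RewriteCond %{HTTP_HOST} ^(www\.)?ctrc\.sk\.ca$ [NC]
        RewriteCond %{REQUEST_URI} !^/cms/
        RewriteRule ^(.*)$ /cms/$1 [L]

    (Depending on whether ISAPI_Rewrite applies this at server or directory level, the leading slash in the pattern may need dropping.) Note that the current DNS-level redirect to mail.ctrc.sk.ca/cms is what exposes the URL; for the above to help, www.ctrc.sk.ca and ctrc.sk.ca would have to point at this server directly.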


  • PHP.ini does not load

    - by Jonathan Park
    OK, this is probably just me not knowing enough about PHP, but here it goes. I'm on Ubuntu Hardy. I have a custom-compiled version of PHP which I have compiled with these parameters: ./configure --enable-soap --with-zlib --with-mysql --with-apxs2=[correct path] --with-config-file-path=[correct path] --with-mysqli --with-curlwrappers --with-curl --with-mcrypt I have used the command pecl install pecl_http to install the http.so extension. It is in the correct module directory for my php.ini. My php.ini is loading, and I can change things within the ini and affect PHP. I have included the extension=http.so line in my php.ini. That worked fine, until I added these compilation options in order to add IMAP: --with-openssl --with-kerberos --with-imap --with-imap-ssl This failed because I needed the c-client library, which I fixed with apt-get install libc-client-dev. After that PHP compiles fine and I have working IMAP support, woo. HOWEVER, now all my calls to HttpRequest, which is part of the pecl_http extension in http.so, result in Fatal error: Class 'HttpRequest' not found errors. I figure the http.so module is no longer loading for one reason or another, but I cannot find any errors showing the reason. You might say "Have you tried undoing the new IMAP setup?" To which I will answer: yes, I have. I directly undid all my config changes and uninstalled the c-client library, and I still can't get it to work. I thought that's weird... I have made no changes that would have resulted in this issue. After looking at that, I have also discovered that not only is the http extension no longer loading, but all my extensions loaded via php.ini are no longer loading. Can someone at least give me some further debugging steps? So far I have tried enabling all errors, including startup errors, in my php.ini, which works for other errors, but I'm not seeing any startup errors either on the command line or via Apache. And yet the php.ini appears to be being parsed, given that if I run phpinfo() I get settings that are in the php.ini.

    Edit: It appears that only some of the php.ini settings are being listened to. Is there a way to test my php.ini?

    Edit 2: It appears I am mistaken again and the php.ini is no longer being loaded at all. However, if I run phpinfo() it shows that PHP is looking for my php.ini in the correct location.

    Edit 3: My config is at the config file path location below, but it says no config file is loaded. WTF - a permission issue? It is currently 644, so everyone should be able to read it, if not write it. I tried making it 777 and that didn't work. Configuration File (php.ini) Path /etc/php.ini Loaded Configuration File (none)

    Edit 4: By loading the ini on the command line using the -c option I am able to run my files, and using -m shows that my modules load. So nothing is wrong with the php.ini itself.
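
    For pinning down which ini the binary actually looks for, a couple of stock PHP switches help (available in PHP >= 5.2.3):

        # compiled-in search path plus the file actually loaded
        php --ini

        # run against an explicit ini and list the modules it pulls in
        php -c /etc/php.ini -m

    The CLI and the Apache SAPI can also disagree: --with-config-file-path fixes the directory at compile time for both, but comparing phpinfo() served through Apache against php --ini from the shell will show whether only one of the two SAPIs has stopped finding /etc/php.ini.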


  • nginx: backend https, proxy_pass shows IP

    - by Vulpo
    I am using nginx as a reverse proxy listening on port 80 (http). I am using proxy_pass to forward requests to backend http and https servers. Everything works fine for my http server, but when I try to reach the https server through the nginx reverse proxy, the IP of the https server is shown in the client's web browser. I want the hostname of the nginx server to be shown instead of the https backend server's IP (once again, this works fine for the http server but not for the https server). See this post on the forum. Here is my configuration file: server { listen 80; server_name domain1.com; access_log off; root /var/www; if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; } location / { proxy_pass http://ipOfHttpServer:port/; } } server { listen 80; server_name domain2.com; access_log off; root /var/www; if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; } location / { proxy_pass http://ipOfHttpsServer:port/; proxy_set_header X_FORWARDED_PROTO https; #proxy_set_header Host $http_host; } } When I try the "proxy_set_header Host $http_host" directive, or "proxy_set_header Host $host", the web page can't be reached (page not found). But when I comment it out, the IP of the https server is shown in the browser (which is bad). Does anyone have an idea? My other config files are: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; user www-data; worker_processes 2; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; server_names_hash_bucket_size 64; sendfile off; tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_comp_level 5; gzip_http_version 1.0; gzip_min_length 0; gzip_types text/plain text/html text/css image/x-icon application/x-javascript; gzip_vary on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Thanks for your help!
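
    A hedged sketch of the second server block, using proxy_redirect to rewrite the Location headers the backend issues - typically what leaks the backend address into the browser:

        server {
            listen 80;
            server_name domain2.com;
            location / {
                # note the scheme: an https backend needs https:// here
                proxy_pass https://ipOfHttpsServer:port/;
                proxy_set_header Host $host;
                # rewrite backend-issued redirects so the client keeps
                # seeing domain2.com, not the backend address
                proxy_redirect https://ipOfHttpsServer:port/ http://domain2.com/;
            }
        }

    Two details in the configs above work against this: proxy_pass currently uses http:// against an HTTPS backend, and the shared include sets "proxy_redirect off;", which disables exactly the Location rewriting this problem needs, so it would have to be overridden in this block.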


  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was being treated as if I were "shopping or seeking product recommendations", even though I was not - by the way, they have deleted my comments too, which were not offensive in nature. Anyway, I have rephrased some parts of my question, and I hope the SF admins do not modify or edit this one - I will be most grateful for that. I have a lot of respect for the people who visit this site and help others! Just to clarify, and to go by SF rules: I am not asking someone to design this solution for me. I am simply seeking real-world examples, experiences, technical expert opinions/suggestions, any tips or tricks people may have, or any problems they may have faced while doing something similar with these products. I am also not asking for capacity planning for storage; we have done some research, and I am seeking expert assurance/suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000 - 4000 workstations (Windows 7, 32- and 64-bit), with a few hundred running Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows: 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database); 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database); 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager, the management app); 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application. Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc. - we already have the Windows firewall disabled). Server hardware: per SQL server, 16GB RAM + SAS disks + dual Xeon, with RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN. SEPM server: 16GB RAM + SAS disks + dual Xeon. System Recovery management server: 16GB RAM + SAS disks + dual Xeon. The above is the initial plan we have for 3000 - 4000 client workstations (Windows).

    Now my questions :-) a) If we had these users distributed amongst two sites, with an AD DC/GC in each site, how would I restrict the SEPM and desktop management solutions to only check for users in their respective site? b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site? c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I can use System Recovery 2011 Server Edition on all four of them as well (licensing is not an issue, as we have the complete Symantec portfolio included in our license). d) Now, saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking of either mounting a LUN from our Hitachi SAN on the Symantec Recovery server itself, or backing up to the user's hard drive locally and then copying it over to a network location. Suggestions welcome :-) If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections to the above - many thanks!


  • Cannot access network computers anymore

    - by Johny Skovdal
    Last Thursday (03/05/12) I got a new computer to be able to work from home. I plugged it, by cable, into the company network and installed most of the software needed for me to do so, by accessing a share on my stationary computer at work. I had no issues here whatsoever, and everything just worked. Yesterday evening I tried accessing the company network through Windows VPN, and while I was able to connect to the network, I was unable to connect to any computers on the network. I did, however, get an error when connecting, but I can't seem to reproduce it to get the details of the error message. Today I am sitting on the company network again, and now I cannot access anything on the network like I could last Thursday, though I can ping all the computers I am attempting to access. Here is a list of details that might help in troubleshooting this issue (updated): List of observations / actions: My computer is identical to another computer that has no issues. It is not on the domain but rather on the default workgroup, but this was not an issue last Thursday, so I am assuming it still is not. I am able to access my e-mail on the Exchange server. I can connect to our TFS server from Visual Studio but not from Explorer. I can also connect to database servers and Remote Desktop. I can see several computers when browsing network computers, but I am unable to connect to any of them. When trying to connect to a computer I am consistently met with the error code "0x80070035" (network path not found). I also get the 0x80070035 error when double-clicking the target computer from the Network UI. I am not met with a login dialog when trying to access a computer, as I should be, since I am not on the domain. (I did log in to Exchange, Remote Desktop and TFS, though.) Between Thursday, when it worked, and Sunday evening, when it did not, I installed quite a few security updates, plus various tools etc. that I need for programming. I have tried accessing by computer name and IP, and neither of them works. I can ping by computer name. I have deleted all (1 entry) stored network credentials. I am able to access my computer from the target computer. Client and server can see each other on the network, i.e. Network Discovery is enabled. I am using the network profile "Work". When accessing the network through VPN, I am unable to get anything to work using computer names, but all of the above applies when using IP addresses instead of computer names. I run Windows 7 Home Premium on my computer. Using PowerShell to attempt to access a share, I get the following error (ComputerName and ShareName being correct values, of course): PS C:\Users\MyUser> cd \\ComputerName\ShareName Set-Location : Cannot find path '\\ComputerName\ShareName' because it does not exist. At line:1 char:3 + cd <<<< \\ComputerName\ShareName + CategoryInfo : ObjectNotFound: (\\ComputerName\ShareName:String) [Set-Location], ItemNotFoundException + FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.SetLocationCommand However, pinging the same machine (ping ComputerName) from PowerShell gets a response immediately.
(As mentioned in the list of observations/actions, I tried the above with the IP address again over VPN, with the same result.) Conclusion: so to sum up, pretty much the only thing I cannot do is access the other computers through browsing (explorer.exe, PowerShell, mapping a network drive, etc.). It seems that path resolution fails somehow when connecting to other computers through browsing, even though the path gets resolved perfectly by all kinds of other services. Any recommendations as to what I can try next to resolve the issue? :)
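
    Since ping works but SMB paths fail, a few checks that separate name resolution from share access may narrow it down (ComputerName/ShareName as placeholders, as above):

        # does SMB path resolution work at all?
        Test-Path \\ComputerName\ShareName

        # force an SMB session; the error text here is usually more
        # specific than the generic 0x80070035 from Explorer
        net use \\ComputerName\ShareName

    If net use names a concrete failure (credentials, the Workstation service, network provider order, etc.), that is a much better lead than "network path not found".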


  • Tomcat and ASP site under IIS6 with SSL

    - by Rafe
    I've been working on migrating our company's website from its original server to a new one, and am having two different but possibly related problems. The box this is sitting on is a Windows 2003 Server x64 machine running IIS 6. The Tomcat version is 5.5.x, as that was the version the original deployment was built on. There are two other sites on the server, one in plain HTML and another in PHP; the one I am trying to migrate is a combination of Java and ASP (the introductory/sign-in pages are Java, as are many reports used for our clients, and the administration pages are in ASP). First of all, I can only access the site if I enter the IP followed by :8080 (xxx.xxx.xxx.xxx:8080). The original setup had an index.html file in the root of the site with a bit of javascript in the header that pointed the site to 'www.mysite.com/app/public', but if I try going directly to the site without the 8080 I get a 'page not found' error, and the javascript redirector causes the same problem because it doesn't add the 8080 into the URL - even though on the original site the 8080 wasn't present, so I don't understand why it would need it now. The js redirect looks like this: <script language="JavaScript"> <!-- location.href = "/app/public/" location.replace("/app/public/"); //--> </script> When setting the site up I used the command line to unbind IIS from all of the IPs on the system (there are 12 IPs on this box) because I was led to believe Tomcat wanted to use localhost, which wasn't accessible. I'm not sure if this was the right thing to do, but I'm throwing it in for the sake of completeness. And actually, at this point trying to go to localhost from the server itself throws up a 'could not connect to localhost' error. If I go to localhost:8080 I get the Tomcat welcome page. If I go to localhost:8080/app/public I get the intro page of our website. So I'm not sure what I'm even looking at in this case - that is, what the proper behavior should be. The second part of the problem is that if I do go to either the IP or localhost as above (localhost:8080/app/public) and click on our login link, it is supposed to transfer me to our login page, yet instead I receive a 'could not connect' error and the URL has changed to localhost:8443/app/secure. From my research I see that port 8443 is Tomcat's SSL port, and the server.xml alludes to it as follows: <Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" /> I have an SSL certificate assigned to the site via IIS, and was under the impression that by default Tomcat allowed IIS to handle secure connections, but apparently something is munged because it's not working. There is another section in the server.xml that reads like this: <Connector port="8009" enableLookups="false" redirectPort="443" protocol="AJP/1.3" /> I'm not sure what that one is for, although port 443 is the SSL port that IIS uses, so I'm confused as to what it is supposed to be doing. Another question I have is: when does the ISAPI redirector actually come into play? How does it know when to try to serve pages through Tomcat and when not to? I've hunted around the 'net for an answer and have yet to find a clear dialogue on the subject. Anyone have any pointers as to where to look for a solution to all of this?
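
    On the last question: the ISAPI redirector decides per URL, from its uriworkermap.properties file, which requests are forwarded to Tomcat over AJP; anything that matches no mapping stays with IIS (and therefore ASP). A hedged sketch of such a mapping for the layout described (worker1 is a placeholder for whatever workers.properties actually defines):

        # forward the Java application to Tomcat; all other paths,
        # including the ASP admin pages, stay with IIS
        /app/*=worker1

    As for the two connectors: the port 8080 entry is Tomcat's own HTTP connector, and its redirectPort="8443" is where Tomcat itself sends a request that a security constraint marks as confidential - which matches the localhost:8443 jump seen on the login link. The port 8009 AJP connector is the one the ISAPI redirector talks to, with redirectPort="443" handing SSL back to IIS; if IIS is meant to terminate SSL, the login flow should be reaching Tomcat through AJP rather than through the 8080 HTTP connector, so that the application never redirects anyone to 8443.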


  • Parallel processing slower than sequential?

    - by zebediah49
    EDIT: For anyone who stumbles upon this in the future: ImageMagick uses an MP (multi-processing) library. It's faster to use available cores if they're around, but if you have parallel jobs, it's unhelpful. Do one of the following: do your jobs serially (with ImageMagick in parallel mode), or set MAGICK_THREAD_LIMIT=1 for your invocation of the ImageMagick binary in question. By making ImageMagick use only one thread, it slows down by 20-30% in my test cases, but it meant I could run one job per core without issues, for a significant net increase in performance.

    Original question: While converting some images using ImageMagick, I noticed a somewhat strange effect. Using xargs was significantly slower than a standard for loop. Since xargs limited to a single process should act like a for loop, I tested that, and found it to be about the same. Thus, we have this demonstration. Quad core (AMD Athlon X4, 2.6GHz), working entirely on a tmpfs (16GB RAM total; no swap), no other major loads. Results: /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level real 0m3.784s user 0m2.240s sys 0m0.230s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level real 0m9.097s user 0m28.020s sys 0m0.910s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level real 0m9.844s user 0m33.200s sys 0m1.270s Can anyone think of a reason why running two instances of this program takes more than twice as long in real time, and more than ten times as long in processor time, to complete the same task? After that initial hit, more processes do not seem to have as significant an effect. I thought it might have to do with disk seeking, so I did that test entirely in RAM. Could it have something to do with how convert works, so that having more than one copy at once means it cannot use the processor cache as efficiently, or something?

    EDIT: When done with 1000 x 769KB files, performance is as expected. Interesting. /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level real 3m37.679s user 5m6.980s sys 0m6.340s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 1 convert -auto-level real 3m37.152s user 5m6.140s sys 0m6.530s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 2 convert -auto-level real 2m7.578s user 5m35.410s sys 0m6.050s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 4 convert -auto-level real 1m36.959s user 5m48.900s sys 0m6.350s /media/ramdisk/img$ time for f in *.bmp; do echo $f ${f%bmp}png; done | xargs -n 2 -P 10 convert -auto-level real 1m36.392s user 5m54.840s sys 0m5.650s
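
    For reference, a minimal sketch of the workaround from the EDIT at the top - pinning each convert process to a single thread so that xargs -P supplies all of the parallelism:

        # one OpenMP thread per ImageMagick process
        export MAGICK_THREAD_LIMIT=1
        time for f in *.bmp; do echo "$f" "${f%bmp}png"; done \
            | xargs -n 2 -P 4 convert -auto-level

    With four single-threaded workers on a quad core, the processes stop contending for the same cores, which matches the 20-30% per-job slowdown but significant net speedup described above.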


  • Configuring nginx server to handle requests from multiple domains

    - by KillABug
    Use case: I am working on a web application which allows users to create HTML templates and publish them on Amazon S3. To publish the websites, I use nginx as a proxy server. What the proxy server does is: when a user enters a website URL, I want to check whether the request comes from my application, i.e. app.mysite.com (this won't change), and route it to apache for regular access; if it's coming from some other domain, like a regular URL www.mysite.com (this needs to be handled dynamically - it can be random), it goes to the S3 bucket that hosts the template. My current configuration is: user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; charset utf-8; keepalive_timeout 65; server_tokens off; sendfile on; tcp_nopush on; tcp_nodelay off; Default Server Block to catch undefined host names server { listen 80; server_name app.mysite.com; access_log off; error_log off; location / { proxy_pass http://127.0.0.1:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; client_max_body_size 10m; client_body_buffer_size 128k; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; } } } Load all the sites: include /etc/nginx/conf.d/*.conf;

    Update, as I was not clear enough: My question is how I can handle both kinds of domains in the config file. My nginx is a proxy server on port 80 on an EC2 instance. This instance also hosts my application, which runs on apache on a different port. So any request for my application will come from the domain app.mysite.com, and I also want to proxy the hosted templates on S3, which sit inside a bucket, say sites.mysite.com/coolsite.com/index.html. So if someone hits coolsite.com, I want to proxy it to the folder sites.mysite.com/coolsite.com/index.html and not to app.mysite.com. Hope I am clear. The other server block: # Server for S3 server { # Listen on port 80 for all IPs associated with your machine listen 80; # Catch all other server names server_name _; # I want it to handle other domains than app.mysite.com # This code gets the host without www. in front and places it inside # the $host_without_www variable # If someone requests www.coolsite.com, then $host_without_www will have the value coolsite.com set $host_without_www $host; if ($host ~* www\.(.*)) { set $host_without_www $1; } location / { # This code rewrites the original request, and adds the host without www in front # E.g. if someone requests # /directory/file.ext?param=value # from the coolsite.com site the request is rewritten to # /coolsite.com/directory/file.ext?param=value set $foo 'http://sites.mysite.com'; # echo "$foo"; rewrite ^(.*)$ $foo/$host_without_www$1 break; # The rewritten request is passed to S3 proxy_pass http://sites.mysite.com; include /etc/nginx/proxy_params; } } Also, I understand I will have to make DNS changes in the CNAME of the domain. I guess I will have to add app.mysite.com under the CNAME of the template domain name? Please correct me if I'm wrong. Thank you for your time.
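
    Reduced to a skeleton, the routing described above needs two server blocks: an exact match for the application host, and a default catch-all that maps any other host into the bucket path. A hedged sketch using the poster's own names and the same rewrite-then-proxy trick as the existing S3 block:

        # app traffic -> local apache
        server {
            listen 80;
            server_name app.mysite.com;
            location / { proxy_pass http://127.0.0.1:8080; }
        }

        # any other host -> sites.mysite.com/<bare-domain><uri>
        server {
            listen 80 default_server;
            server_name _;
            set $host_without_www $host;
            if ($host ~* ^www\.(.*)$) { set $host_without_www $1; }
            location / {
                rewrite ^(.*)$ /$host_without_www$1 break;
                proxy_pass http://sites.mysite.com;
            }
        }

    On the DNS side: each customer domain has to reach this nginx box, so an A record to its IP (or, for hosts like www, a CNAME to a name that resolves there) is what matters; pointing a CNAME at app.mysite.com would only work because it resolves to the same address, not because of the name itself.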


  • Issue in nginx proxying to apache

    - by Luis Masuelli
    My current nginx configuration is as follows. Specific configuration for (currently two) domains: server { listen 443 ssl; server_name studiotv.service.tebusco.lan phpmyadmin.service.tebusco.lan; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { proxy_pass http://127.0.0.1:8180; proxy_set_header Host $http_host:8180; } } Default configuration for unmatched ssl connections: server { listen 443 default ssl; ssl_certificate /home/administrador/nginx-confs/ssl/service.tebusco.lan.crt; ssl_certificate_key /home/administrador/nginx-confs/ssl/service.tebusco.lan.key; location / { return 403; } } http configuration: server { listen 80; rewrite ^ https://$host$request_uri? permanent; } The intention is clear: Redirect http traffic to https. Proxy each https:// call from phpmyadmin.service.tebusco.lan and studiotv.service.tebusco.lan to apache2. This includes passing a host header, which is detected. Each unmatched ssl connection must return a 403 in nginx; it does not even reach apache2. On the apache2 side, I have a default site and a non-default site which should match studiotv.service.tebusco.lan: 000-default.conf file (available and enabled): <VirtualHost 127.0.0.1:8180> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. ServerName localhost ServerAdmin webmaster@localhost DocumentRoot /var/www/html <Directory /var/www/html> Order deny,allow Require all granted </Directory> </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet studiotv.conf file (available and enabled): <VirtualHost *:8180> ServerName studiotv.service.tebusco.lan ServerAdmin [email protected] DocumentRoot /var/www/studiotv <Directory /var/www/studiotv/> Options -Indexes +FollowSymLinks AllowOverride None Order deny,allow Allow from all Require all granted </Directory> # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn # No usamos ${APACHE_LOG_DIR} sino en su lugar /var/log/<host> (we don't use ${APACHE_LOG_DIR}; instead we use /var/log/<host>) ErrorLog /var/log/apache2/studiotv/error.log CustomLog /var/log/apache2/studiotv/access.log combined </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet However, when I open http://studiotv.service.tebusco.lan in the browser, the default site's page is shown instead. Question: What am I missing? (Apache 2.4.7, nginx 1.6.0, Ubuntu Server 14.04.)
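
    One thing worth ruling out is the Host header nginx forwards; a hedged tweak to the proxy block:

        location / {
            proxy_pass http://127.0.0.1:8180;
            # forward the hostname the client asked for, without
            # appending a port, so apache's name-based matching can
            # compare it against ServerName studiotv.service.tebusco.lan
            proxy_set_header Host $host;
        }

    After reloading, apachectl -S (apache2ctl -S on Ubuntu) prints the parsed vhost map for port 8180 and shows which host is the default; if studiotv.conf is listed but never matched, logging the Host value apache actually receives (%{Host}i in a LogFormat) is the next thing to compare.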


  • Optimise Apache for EC2 micro instance

    - by Shiyu Sekam
    I'm running apache2 on an EC2 micro instance with ~600 MB RAM. The instance was running for almost a year without problems, but in the last few weeks it just keeps crashing because the server reaches MaxClients. The server basically runs a few websites: one WordPress blog (not often used), the company website (most used) and 2 small sites, which are just internal. The database for the blog runs on RDS, so there's no MySQL running on this web server. When I came to the company, the server was already set up and running apache + mod_php + prefork. We want to migrate that to nginx + php-fpm in the future, but it still needs further testing, so for now I have to stick with the old setup. I also use CloudFlare DDOS protection in front of the server, because it was attacked a couple of times in the last few weeks. My company doesn't want to pay for a better web server at this point, so I have to stick with the micro instance as well. Additionally, the code for the website we run is really bad and slow, and sometimes a single page load can take up to 15 seconds. The whole website is dynamic and written in PHP, so caching isn't really an option here. It's a customized search for users. I've already turned off KeepAlive, which improved the performance a little bit. My prefork config looks like the following: StartServers 2 MinSpareServers 2 MaxSpareServers 5 ServerLimit 10 MaxClients 10 MaxRequestsPerChild 100 The server just becomes unresponsive after running for a while, and I've run the following command to see how many connections there are: netstat | grep http | wc -l 75 Trying to restart apache helps for a short moment, but after a while the apache process(es) become unresponsive again. I have the following modules enabled (output of apache2ctl -M): Loaded Modules: core_module (static) log_config_module (static) logio_module (static) version_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) authz_host_module (shared) deflate_module (shared) dir_module (shared) expires_module (shared) mime_module (shared) negotiation_module (shared) php5_module (shared) rewrite_module (shared) setenvif_module (shared) ssl_module (shared) status_module (shared) Syntax OK apache2.conf: # Security ServerTokens OS ServerSignature On TraceEnable On ServerName "web.example.com" ServerRoot "/etc/apache2" PidFile ${APACHE_PID_FILE} Timeout 30 KeepAlive off User www-data Group www-data AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> <Directory /> Options FollowSymLinks AllowOverride None </Directory> DefaultType none HostnameLookups Off ErrorLog /var/log/apache2/error.log LogLevel warn EnableSendfile On #Listen 80 Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf Include /etc/apache2/ports.conf LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include /etc/apache2/conf.d/*.conf Include /etc/apache2/sites-enabled/*.conf Vhost of the main site: <VirtualHost *:80> ServerName www.example.com ## Vhost docroot DocumentRoot /srv/www/jenkins/Web ## Directories, there should at least be a declaration for /srv/www/jenkins/Web <Directory /srv/www/jenkins/Web> AllowOverride All Order allow,deny Allow from all </Directory> ## Load additional static includes ## Logging ErrorLog /var/log/apache2/www.example.com.error.log LogLevel warn ServerSignature Off CustomLog
/var/log/apache2/www.example.com.access.log combined ## Rewrite rules RewriteEngine On RewriteCond %{HTTP_HOST} !^www.example.com$ RewriteRule ^.*$ http://www.example.com%{REQUEST_URI} [R=301,L] ## Server aliases ServerAlias www.example.invalid ServerAlias example.com ## Custom fragment <Location /srv/www/jenkins/Web/library> Order Deny,Allow Deny from all </Location> <Files ~ "^\.(.+)"> Order deny,allow deny from all </Files> </VirtualHost>
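
    Given the constraints above (~600 MB RAM, mod_php with prefork, pages that can take 15 seconds), the usual sizing exercise is to measure a child's real memory use and divide. A hedged sketch, assuming each PHP child peaks somewhere near 50 MB - measure first:

        # actual per-child resident size, largest last
        ps -ylC apache2 --sort=rss

        # then size prefork so MaxClients * child size fits in RAM
        # with room left for the OS, e.g. for ~50 MB children:
        #   StartServers          2
        #   MinSpareServers       2
        #   MaxSpareServers       4
        #   MaxClients            8
        #   MaxRequestsPerChild 500

    With KeepAlive already off, the remaining bottleneck is really the 15-second pages: 8-10 prefork workers at that service time cap throughput below one request per second, so anything that shortens those requests buys more than MPM tuning will.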


  • What networking hardware do I need in this situation (Fairpoint [ISP] "E-DIA" connection)?

    - by Tegeril
    Right away you'd probably want to say, "Well, just ask Fairpoint." I've done that, a number of times, in as many different ways as I can phrase it, and I just keep hitting a brick wall where they will not commit to giving any useful information and instead recommend contracting an outside firm and spending a pile of money. Anyway... I'm trying to help a family member out with an office connection that is being set up. I've managed to scrape tiny details here and there from our discussions with the ISP (Fairpoint in Maine) about what is going to be done and what is going to be needed. This is the connection that is being set up: http://www.fairpoint.com/enterprise/vantagepoint/e-dia/index.jsp Information I have been given: Via this connection I can get IPs across different C blocks if that were necessary (it is not). Fairpoint is bringing hardware with them that they claim simply does the conversion from whatever line is coming into the building to ethernet; they have referred to this as the "Fairpoint Netvanta", which I know suggests a line of products that I have looked up, but some (most? all?) of those seem to handle all the routing, from what I saw. Fairpoint says that I need to bring my own router to sit behind their device. They have literally declined to even suggest products that have worked for other clients in the past, and fall back on "any business router works, not a home router." That alone makes my head spin. Detail and clarity hit a brick wall from there. At one moment I got them to cough up that the router I provide needs to be able to do VPN tunneling, but they typically fall back to "not a home router", and I was even given "just a business router, Cisco or something, it'll be $500-$1000". Now, I know that VPN tunneling routers exist well below that price point, and since this connection is going to one machine, possibly two, only via ethernet, my desire to purchase networking hardware that over-delivers what I need is not very high. They are literally setting all this up, have provided no configuration details for after they finish, and expect me to just plunk a $500+ router behind it and cross my fingers, or contract out to a third-party company. If there were other options available for the location, I would have dropped them in a second, but there aren't. The device that is connected requires a static IP, and I'm honestly a bit hazy on the necessity of an additional router behind their device, and generally a bit over my head. I presume that the router needs to be able to serve external static IPs to its clients, but I really don't know what is going to show up when they come to do the install. This was originally going to be run via an ADSL bridge modem with a range of static IPs (which is easy and is currently set up properly), but the location is too far from the telco to get the speeds we really want for upload, and this is also a connection that needs high availability. Any suggestions would be greatly appreciated (I see a number of options in the Cisco Small Business line and other competitors that aren't going to break the bank…), especially if you've worked with Fairpoint before! Thanks for reading my wall of text.

    Read the article

  • I am getting a 400 Bad Request error when using Nginx and PHP-FPM, why?

    - by Bob
    I am trying to run a website (that requires PHP - it technically doesn't require MySQL at this time, but it may sometime in the near future as I continue developing it, so I went ahead and installed that as well) using nginx 1.2.4 and PHP-FPM 5.3.3 on Ubuntu 12.04.1 LTS. As far as I know, I haven't done anything wrong, but clearly something is not quite right - I seem to be getting a 400 Bad Request error whenever I try to browse to my website. I've been mostly following one guide, and I've done more or less everything it recommends, except that I did not set up PHP-FPM to use a Unix socket, and I used service as opposed to /etc/init.d/ when starting/stopping nginx, PHP, and MySQL. Anyways, here are my relevant configuration files (I have only censored personal/sensitive details, like my domain name - which contains my real name):

/etc/nginx/nginx.conf

user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    ##
    # Basic Settings
    ##
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 15;
    types_hash_max_size 2048;
    # server_tokens off;
    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_disable "msie6";
    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # nginx-naxsi config
    ##
    # Uncomment it if you installed nginx-naxsi
    ##
    #include /etc/nginx/naxsi_core.rules;

    ##
    # nginx-passenger config
    ##
    # Uncomment it if you installed nginx-passenger
    ##
    #passenger_root /usr;
    #passenger_ruby /usr/bin/ruby;

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

/etc/nginx/sites-enabled/subdomain.mydomain.net

server {
    listen 80;       # listen for IPv4
    listen [::]:80;  # listen for IPv6

    server_name www.subdomain.mydomain.net subdomain.mydomain.net;

    access_log /srv/www/subdomain.mydomain.net/logs/access.log;
    error_log /srv/www/subdomain.mydomain.net/logs/error.log;

    location / {
        root /srv/www/subdomain.mydomain.net/public;
        index index.php;
    }

    location ~ \.php$ {
        try_files $uri =400;
        include fastcgi_params;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/subdomain.mydomain.net/public$fastcgi_script_name;
    }
}

All the directories listed in the configuration files above are correct on my server (to the extent of my knowledge). I have not included /etc/php5/fpm/pool.d/www.conf or /etc/php5/fpm/php.ini in this post as they're rather long, but I have posted them on Pastebin: http://pastebin.com/ensErJD8 and http://pastebin.com/T23dt7vM, respectively. Although, the only thing I've changed in either of the two files was in php.ini, where I set expose_php to off so as to hide the .php file extension from users. What can I do to resolve my issue? Please let me know if I need to supply any additional details.
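One detail worth a second look is the try_files $uri =400 line: it tells nginx to answer 400 for any .php request that does not map to a real file on disk, so a wrong document root or a stray URI produces exactly the symptom described. Below is a hedged sketch of the more conventional shape for this vhost, with root hoisted to the server level so $document_root can be reused and the path only lives in one place; whether this is the culprit in Bob's setup is a guess.

server {
    listen 80;
    listen [::]:80;
    server_name www.subdomain.mydomain.net subdomain.mydomain.net;

    # One root for the whole server block; every location inherits it
    root /srv/www/subdomain.mydomain.net/public;
    index index.php;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        # 404 (not 400) when the script file doesn't exist on disk
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}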

    Read the article

  • How to install 'version.h' in Ubuntu?

    - by user252098
    Just now I tried to install the Jungo WinDriver on Ubuntu 13.10, but I am puzzled by its manual's instructions for installing version.h:

Install version.h: The file version.h is created when you first compile the Linux kernel source code. Some distributions provide a compiled kernel without the file version.h. Look under /usr/src/linux/include/linux to see whether you have this file. If you do not, follow these steps:

Become super user:
$ su
Change directory to the Linux source directory:
# cd /usr/src/linux
Type:
# make xconfig
Save the configuration by choosing Save and Exit.
Type:
# make dep
Exit super user mode:
# exit

But the shell says:

warning: make dep is unnecessary now.

Then I found out there is a version.h in /usr/src/linux-headers-3.11.0-12-generic, so I typed:

/usr/src/windriver/redist# ./configure --with-kernel-source=/usr/src/linux-headers-3.11.0-12-generic

But the WinDriver configure run fails:

USE_KBUILD = yes
checking for cpu architecture... x86_64
checking for WinDriver root directory... /usr/src/WinDriver
checking for linux kernel source... found at /usr/src/linux
checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
checking which directories to include... -I/usr/src/linux/include
checking linux kernel version... 3.11.10.6
checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
checking output directory... LINUX.3.11.0-12-generic.x86_64
checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6_usb.ko
checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
0
checking for modpost location... /usr/src/linux/scripts/mod/modpost
configure.usb: creating ./config.status
config.status: creating makefile.usb.kbuild
checking for cpu architecture... x86_64
checking for WinDriver root directory... /usr/src/WinDriver
checking for linux kernel source... found at /usr/src/linux
checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
checking which directories to include... -I/usr/src/linux/include
checking linux kernel version... 3.11.10.6
checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
checking output directory... LINUX.3.11.0-12-generic.x86_64
checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6.ko
checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
0
checking for right linked object... windrvr_gcc_v3.a
checking for modpost location... /usr/src/linux/scripts/mod/modpost
configure.wd: creating ./config.status
config.status: creating makefile.wd.kbuild

What is the problem?
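For what it is worth, on a 3.11 kernel the header layout has changed, which is probably why the manual's steps no longer line up. A hedged sketch of the usual workaround follows; the symlink target is an assumption on my part, not something the WinDriver manual prescribes.

# Install headers matching the running kernel, if they are not there already
sudo apt-get install linux-headers-$(uname -r)

# Since kernel 3.7, version.h lives under include/generated/uapi/linux/
ls /usr/src/linux-headers-$(uname -r)/include/generated/uapi/linux/version.h

# Many out-of-tree build systems still expect /usr/src/linux to exist,
# so point it at the headers tree (assumes nothing else owns that path)
sudo ln -sfn /usr/src/linux-headers-$(uname -r) /usr/src/linux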

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson’s wonderful SPServices project for. If you haven’t already seen or used SPServices, please do. It’s a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you’re not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let’s say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that’s easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You’ll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page to be displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work. The DataView Web Part doesn’t look *exactly* like what the out of the box display form page does. Not a huge problem and can be overcome with some CSS style overrides but still, more work. A DVWP showing a single record doesn’t have the same toolbar that you would using the DispForm.aspx. Not a show-stopper and you can rebuild the toolbar but it’s going to potentially require code and then there’s the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called “ID” which then displays whatever that item ID number is in the list. Trouble is, when you’re coming in from an external app via a link, you don’t know what that internal ID is (and frankly shouldn’t). I don’t like exposing internal SharePoint IDs to the outside world for the same reason I don’t do it with database IDs. They’re internal and while it’s find to use on the site itself you don’t want external links using it. It’s volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That’s great if you have that ability but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it’s Java, but really I’m a lazy geek). However if you’re doing this and have access to call a web service that would be an option. 
Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That's where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knife of Web Parts, onto the DispForm.aspx page and added these lines:

<script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script>
<script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script>
<script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"></script>

Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that's just a PITA), it provides me with version control of sorts, and it's quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one).

$(document).ready(function(){

    var queryStringValues = $().SPServices.SPGetQueryString();
    var id = queryStringValues["ID"];

    if(id == "0") {

        var customer = queryStringValues["CustomerNumber"];
        var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>";
        var url = window.location;
        $().SPServices({
            operation: "GetListItems",
            listName: "Customers",
            async: false,
            CAMLQuery: query,
            completefunc: function (xData, Status) {
                $(xData.responseXML).find("[nodeName=z:row]").each(function(){
                    id = $(this).attr("ows_ID");
                    url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                    window.location = url;
                });
            }
        });
    }
});

What's happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library as I had 15 lines of code to do this which is now gone). Line 4: Extract the ID value from the query string. Line 6: If we pass in "0" it means we're looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but look up our values when invoked. Why ID at all? DispForm.aspx doesn't work unless you pass in something and "0" is a *magic* number that will invoke the page but not look up a value in the database. Lines 8-15: Extract the CustomerNumber query string value, build a CAML query to find it, then call the GetListItems method using SPServices. Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item. Lines 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page.

As you can see, it dynamically creates a CAML query for the call to the web service using the passed in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want.
For example, you could bring up the correct item on the DispForm.aspx page based on customer name with something like this:

http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony

Use your imagination. Some people would opt for building a custom page with a DVWP but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don't want to rely on internal SharePoint IDs.
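To riff on that, here is one way the generic version might look, sketched against the same SPServices 0.5.3 calls used above. The FilterId/FilterValue names match the example URL; the list name is still hard-coded here and would need the same treatment to be truly generic.

$(document).ready(function () {
    var qs = $().SPServices.SPGetQueryString();

    if (qs["ID"] == "0" && qs["FilterId"] && qs["FilterValue"]) {
        // Build the CAML query from whatever field/value pair was passed in
        var query = "<Query><Where><Eq><FieldRef Name='" + qs["FilterId"] +
                    "'/><Value Type='Text'>" + qs["FilterValue"] +
                    "</Value></Eq></Where></Query>";

        $().SPServices({
            operation: "GetListItems",
            listName: "Customers",  // assumption: could also be passed on the URL
            async: false,
            CAMLQuery: query,
            completefunc: function (xData, Status) {
                $(xData.responseXML).find("[nodeName=z:row]").each(function () {
                    // Redirect to the real internal ID of the first match
                    window.location = $().SPServices.SPGetCurrentSite() +
                        "/Lists/Customers/DispForm.aspx?ID=" + $(this).attr("ows_ID");
                });
            }
        });
    }
});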

    Read the article

  • Scan a Windows PC for Viruses from a Ubuntu Live CD

    - by Trevor Bekolay
    Getting a virus is bad. Getting a virus that causes your computer to crash when you reboot is even worse. We'll show you how to clean viruses from your computer even if you can't boot into Windows by using a virus scanner in a Ubuntu Live CD. There are a number of virus scanners available for Ubuntu, but we've found that avast! is the best choice, with great detection rates and usability. Unfortunately, avast! does not have a proper 64-bit version, and forcing the install does not work properly. If you want to use avast! to scan for viruses, then ensure that you have a 32-bit Ubuntu Live CD. If you currently have a 64-bit Ubuntu Live CD on a bootable flash drive, it does not take long to wipe your flash drive and go through our guide again and select normal (32-bit) Ubuntu 9.10 instead of the x64 edition. For the purposes of fixing your Windows installation, the 64-bit Live CD will not provide any benefits. Once Ubuntu 9.10 boots up, open up Firefox by clicking on its icon in the top panel. Navigate to http://www.avast.com/linux-home-edition. Click on the Download tab, and then click on the link to download the DEB package. Save it to the default location. While avast! is downloading, click on the link to the registration form on the download page. Fill in the registration form if you do not already have a trial license for avast!. By the time you've filled out the registration form, avast! will hopefully be finished downloading. Open a terminal window by clicking on Applications in the top-left corner of the screen, then expanding the Accessories menu and clicking on Terminal. In the terminal window, type in the following commands, pressing enter after each line.

cd Downloads
sudo dpkg -i avast*

This will install avast! on the live Ubuntu environment. To ensure that you can use the latest virus database, while still in the terminal window, type in the following command:

sudo sysctl -w kernel.shmmax=128000000

Now we're ready to open avast!. Click on Applications on the top-left corner of the screen, expand the Accessories folder, and click on the new avast! Antivirus item. You will first be greeted with a window that asks for your license key. Hopefully you've received it in your email by now; open the email that avast! sends you, copy the license key, and paste it in the Registration window. avast! Antivirus will open. You'll notice that the virus database is outdated. Click on the Update database button and avast! will start downloading the latest virus database. To scan your Windows hard drive, you will need to "mount" it. While the virus database is downloading, click on Places on the top-left of your screen, and click on your Windows hard drive, if you can tell which one it is by its size. If you can't tell which is the correct hard drive, then click on Computer and check out each hard drive until you find the right one. When you find it, make a note of the drive's label, which appears in the menu bar of the file browser. Also note that your hard drive will now appear on your desktop. By now, your virus database should be updated. At the time this article was written, the most recent version was 100404-0. In the main avast! window, click on the radio button next to Selected folders and then click on the "+" button to the right of the list box. It will open up a dialog box to browse to a location. To find your Windows hard drive, click on the ">" next to the computer icon. In the expanded list, find the folder labelled "media" and click on the ">" next to it to expand it.
In this list, you should be able to find the label that corresponds to your Windows hard drive. If you want to scan a certain folder, then you can go further into this hierarchy and select that folder. However, we will scan the entire hard drive, so we'll just press OK. Click on Start scan and avast! will start scanning your hard drive. If a virus is found, you'll be prompted to select an action. If you know that the file is a virus, then you can Delete it, but there is the possibility of false positives, so you can also choose Move to chest to quarantine it. When avast! is done scanning, it will summarize what it found on your hard drive. You can take different actions on those files at this time by right-clicking on them and selecting the appropriate action. When you're done, click Close. Your Windows PC is now free of viruses, in the eyes of avast!. Reboot your computer and with any luck it will now boot up!

Alternatives to avast!

If avast! and a liberal amount of Googling doesn't fix your problem, it's possible that a different virus scanner will fix your obscure issue. Here is a list of other virus scanners available for Ubuntu that are either free or offer free trials. See their support forums for help on installing these virus scanners.

- Avira AntiVir Personal for Linux / Solaris
- Panda Antivirus for Linux (installation and usage guide from Ubuntu)
- F-PROT Antivirus for Linux
- ClamAV (installation and usage guide from Ubuntu)
- NOD32 Antivirus for Linux
- Kaspersky Anti-Virus 2010
- Bitdefender Antivirus for Unices

Conclusion

Running avast! from a Ubuntu Live CD can clean the vast majority of viruses from your Windows PC. This is another reason to always have a Ubuntu Live CD ready just in case something happens to your Windows installation!

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
    As 2012 fades and we usher in a New Year, let's look back at some of the hottest BPM trends and those we'll be seeing more of in the coming months. BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communications and new devices to interact, the changes are reflected in BPM practices. As Social and Mobile have become an integral part of our personal and professional lives, we'll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on Customer Experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012. Some of these trends that will continue to gain momentum in 2013: Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes. Social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today's business users. Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting the policy of "Bring Your Own Device," and the device of choice is a tablet. Devices like smart phones and tablets not only provide mobility to workers and customers, but they also provide additional important information – the context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution. Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services.
Customer experience is paramount when companies design sales and support processes. Companies are looking to BPM to consistently and efficiently orchestrate customer-facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience. Pace Layered Architecture strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independent of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind. The database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most of the business changes to policy or process can be done at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient. Adaptive Case Management proposes more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next because the sequence of steps can't be predetermined. Another characteristic is that it requires much more collaboration than a straight-through process. As simple processes become automated, and customers adopt more and more self-service, cases that reach the case workers are much more complex and need more investigation. Oracle BPM Suite includes comprehensive adaptive case management capability to manage such unstructured and complex processes. Smart BPM, or making your BPM intelligent, has been the holy grail for BPM practitioners who imagined that one day BPM would become one with Business Intelligence, Business Activity Monitoring and Complex Event Processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real Time Decisions to help organizations create smart process based solutions. In order to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators, pre-built best practices applications built on Oracle BPM Suite that are fully production grade and ready to deploy. These are exciting times for BPM practitioners and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • Michael Crump's notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
    TIME TO GO PRO! This is my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5 I created it using several resources (various certification web sites, msdn, official ms 70-548 book). The reason that I created this review is because a) I am taking the exam. b) MS did not create a book for this exam. Use the(MS 70-548)book. c) To make sure I am familiar with each before the exam. I hope that it provides a good start for your own notes. I hope that someone finds this useful. At least, it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam does contains very little code. It is basically all theory. 1. Validation Controls – How to prevent users from entering invalid data on forms. (MaskedTextBox control and RegEx) 2. ServiceController – used to start and control the behavior of existing services. 3. User Feedback (know winforms Status Bar, Tool Tips, Color, Error Provider, Context-Sensitive and Accessibility) 4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after exception handling of ArgumentNullException, all ArgumentNullException thrown by the Helper method will be caught and logged correctly. 5. A heartbeat method is a method exposed by a Web service that allows external applications to check on the status of the service. 6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user. 7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual projects. 8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider time and resources to fix it, bug risk, and impacts of the bug. 9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget. 10. Code reviews should be conducted by a technical lead or a technical peer. 11. Testing Applications 12. WCF Services – application state 13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / Microsoft SQL Server 3.5 Compact Database– used for client computers to retrieve and save data from a shared location. 14. SQL Server 2008 Compact Edition – used for minimum possible memory and can synchronize data with a corporate SQL Server 2008 Database. Supports offline user and minimum dependency on external components. 15. MDI and SDI Forms (specifically IsMDIContainer) 16. GUID – in the case of data warehousing, it is important to define unique keys. 17. Encrypting / Security Data 18. Understanding of Isolated Storage/Proper location to store items 19. LINQ to SQL 20. Multithreaded access 21. 
ADO.NET Entity Framework model 22. Marshal.ReleaseComObject 23. Common User Interface Layout (ComboBox, ListBox, Listview, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl) 24. DataSets Class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx 25. SQL Server 2008 Reporting Services (SSRS) 26. SystemIcons.Shield (Vista UAC) 27. Leverging stored procedures to perform data manipulation for a database schema that can change. 28. DataContext 29. Microsoft Windows Installer Packages, ClickOnce(bootstrapping features), XCopy. 30. Client Application Services – will authenticate users by using the same data source as a ASP.NET web application. 31. SQL Server 2008 Caching 32. StringBuilder 33. Accessibility Guidelines for Windows Applications http://msdn.microsoft.com/en-us/library/ms228004.aspx 34. Logging erros 35. Testing performance related issues. 36. Role Based Security, GenericIdentity and GenericPrincipal 37. System.Net.CookieContainer will store session data for webapps (see isolated storage for winforms) 38. .NET CLR Profiler tool will identify objects that cause performance issues. 39. ADO.NET Synchronization (SyncGroup) 40. Globalization - CultureInfo 41. IDisposable Interface- reports on several questions relating to this. 42. Adding timestamps to determine whether data has changed or not. 43. Converting applications to .NET Framework 3.5 44. MicrosoftReportViewer 45. Composite Controls 46. Windows Vista KNOWN folders. 47. Microsoft Sync Framework 48. TypeConverter -Provides a unified way of converting types of values to other types, as well as for accessing standard values and sub properties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx 49. Concurrency control mechanisms The main categories of concurrency control mechanisms are: Optimistic - Delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort a transaction, if the desired rules are violated. Pessimistic - Block operations of a transaction, if they may cause violation of the rules. Semi-optimistic - Block operations in some situations, and do not block in other situations, while delaying rules checking to transaction's end, as done with optimistic. 50. AutoResetEvent 51. Microsoft Messaging Queue (MSMQ) 4.0 52. Bulk imports 53. KeyDown event of controls 54. WPF UI components 55. UI process layer 56. GAC (installing, removing and queuing) 57. Use a local database cache to reduce the network bandwidth used by applications. 58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

Scene setting

A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

General tips

Here is a simple tick list you can follow in order to tune your lookups:

- Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
- Only pick the columns that you need, ignore everything else.
- Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
- Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
- Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

Thinking outside the box

It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing:

- There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
- Know your data.
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory.

- Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
- Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
- Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all once you have the key column(s) from your lookup set then you can use that key to get the rest of attributes further downstream, perhaps even in another dataflow.
- Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.

Above all, experiment and be creative with different combinations. You may be surprised at what works.

Final thoughts

If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet
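As a footnote to the "use a SQL statement" and WHERE-clause tips above, the cache-charging query might look something like this; the table and column names are invented purely for illustration:

-- Charge the Lookup cache with only what the comparison needs:
-- named, narrow columns and a WHERE clause that excludes rows
-- the pipeline can never match.
SELECT CustomerId,      -- surrogate key the dataflow needs downstream
       CustomerNumber   -- the join column, kept as narrow as possible
FROM   dbo.Customer
WHERE  IsActive = 1     -- skip lookup rows we know won't be matched
  AND  Region = 'EMEA'; -- in SSIS2008+ this literal could be built
                        -- dynamically to partition the cache per run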

    Read the article

  • Customize your icons in Windows 7 and Vista

    - by Matthew Guay
    Want to change out the icons on your desktop and more? Personalizing your icons is a great way to make your PC uniquely yours, and today we show you how to grab unique icons and make the default Windows icons your own.

Change the icon for Computer, Recycle Bin, Network, and your User folder

Right-click on the desktop, and select Personalize. Now, click the "Change desktop icons" link on the left sidebar in the Personalization window. The window looks slightly different in Windows Vista, but the link is the same. Select the icon you wish to change, and click the Change Icon button. In Windows 7, you will also notice a box to choose whether or not to allow themes to change icons, and you can uncheck it if you don't want themes to change your icon settings. You can select one of the other included icons, or click browse to find the icon you want. Click Ok when you are finished.

Change Folder icons

You can easily change the icon on most folders in Windows Vista and 7. Simply right-click on the folder and select properties. Click the Customize tab, and then click the Change Icon button. This will open the standard dialog to change your icon, so proceed as normal. This basically just creates a hidden desktop.ini file in the folder containing the following or similar data:

[.ShellClassInfo]
IconFile=%SystemRoot%\system32\SHELL32.dll
IconIndex=20

You could manually create or edit the file if you choose, instead of using the dialogs. Simply create a new text file named desktop.ini with this same information, or edit the existing one. Change the IconFile line to the location of your icon. If you are pointing to a .ico file you should change the IconIndex line to 0 instead. Note that this isn't available for all folders, for instance you can't use this to change the icon for the Windows folder. In Windows 7, please note that you cannot change the icon of a folder inside a library. So if you are browsing your Documents library and would like to change an icon in that folder, right-click on it and select Open folder location. Now you can change the icon as above. And if you would like to change a Library's icon itself, then check out this tutorial: Change Your Windows 7 Library Icons the Easy Way

Change the icon of any file type

Want to make your files easier to tell apart? Check out our tutorial on how to simply do this: Change a File Type's Icon in Windows 7

Change the icon of any Application Shortcut

To change the icon of a shortcut on your desktop, start menu, or in Explorer, simply right-click on the icon and select Properties. In the Shortcut tab, click the Change Icon button. Now choose one of the other available icons or click browse to find the icon you want.

Change Icons of Running Programs in the Windows 7 taskbar

If your computer is running Windows 7, you can customize the icon of any program running in the taskbar! This only works on applications that are running but not pinned to the taskbar, so if you want to customize a pinned icon you may want to unpin it before customizing it. But the interesting thing about this trick is that it can customize the icon of anything running in the taskbar, including things like Control Panel! Right-click or click and push up to open the jumplist on the icon, and then right-click on the program's name and select Properties. Here we are customizing Control Panel, but you can do this on any application icon. Now, click Change Icon as usual.
Select an icon you want (We switched the Control Panel icon to the Security Shield), or click Browse to find another icon. Click Ok when finished, and then close the application window. The next time you open the program (or Control Panel in our example), you will notice your new icon on its taskbar icon. Please note that this only works on applications that are currently running and are not pinned to the taskbar. Strangely, if the application is pinned to the taskbar, you can still click Properties and change the icon, but the change will not show up.

Change the icon on any Drive on your Computer

You can easily change the icon on your internal hard drives and portable drives with the free Drive Icon Changer application. Simply download and unzip the file (link below), and then run the application as administrator by right-clicking on the icon and selecting "Run as administrator". Now, select the drive that you want to change the icon of, and select your desired icon file. Click Save, and Drive Icon Changer will let you know that the icon has been changed successfully. You will then need to reboot your computer to complete the changes. Simply click Yes to reboot. Now, our Drive icon is changed from the default image to a Laptop icon we chose! You can do this to any drive in your computer, or to removable drives such as USB flash drives. When you change these drives' icons, the new icon will appear on any computer you insert the drive into. Also, if you wish to remove the icon change, simply run the Drive Icon Changer again and remove the icon path.

Download Drive Icon Changer

This application actually simply creates or edits a hidden Autorun.inf file on the top of your drive. You can edit or create the file yourself by hand if you'd like; simply include the following information in the file, and save it in the top directory of your drive:

[autorun]
ICON=[path of your icon]

Remove Arrow from shortcut icons

Many people don't like the arrow on the shortcut icon, and there are two easy ways to remove it. If you're running the 32 bit version of Windows Vista or 7, simply use the Vista Shortcut Overlay Remover. If your computer is running the 64 bit version of Windows Vista or 7, use the Ultimate Windows Tweaker instead. Simply select the Additional Tweaks section, and check the "Remove arrows from Shortcut Icons" box. For more info and download links check out this article: Disable Shortcut Icon Arrow Overlay in Windows 7 or Vista

Closing

This gives you a lot of ways to customize almost any icon on your computer, so you can make it look just like you want it to. Stay tuned for more great desktop customization articles from How-to Geek!
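As a rough sketch of the manual desktop.ini route described earlier, the whole thing can be done from a Command Prompt; the paths are placeholders, and the attrib flags reflect how Explorer generally expects these files to be marked, which is worth verifying on your own system:

rem Mark the folder as a system folder so Explorer honours its desktop.ini
attrib +s "C:\Path\To\MyFolder"

rem Create the desktop.ini (the icon path below is just an example)
rem (redirection written first so a trailing digit isn't read as a file handle)
> "C:\Path\To\MyFolder\desktop.ini" echo [.ShellClassInfo]
>>"C:\Path\To\MyFolder\desktop.ini" echo IconFile=C:\Icons\MyIcon.ico
>>"C:\Path\To\MyFolder\desktop.ini" echo IconIndex=0

rem Hide the file so it does not clutter the folder
attrib +h "C:\Path\To\MyFolder\desktop.ini"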

    Read the article

  • Quick guide to Oracle IRM 11g: Creating your first sealed document

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

The previous articles in this guide have detailed how to install, configure and secure your Oracle IRM 11g service. This article walks you through the process of creating your first context and securing a document against it. I should mention that it would be worth reviewing the following to ensure your installation is ready for that all important first document.

- Ensure you have correctly configured the keystore for the IRM wrapper keys. If this is not correctly configured, creating the context below will fail.
- Make sure the IRM server URL correctly resolves and uses the right protocol (HTTP or HTTPS).

Contents

- Create the first context
- Install the Oracle IRM Desktop
- Seal your first document

Create the first context

In Oracle 11g there is a built-in classification and rights system called the "standard rights model" which is based on 10 years of customer use cases and innovation. It is a system which enables IRM to scale massively whilst retaining the ability to balance security and usability and also separate duties by allowing contacts in the business to own classifications. The final article in this guide goes into detail on this inbuilt classification model, but for the purposes of this current article all we need to do is create at least one context to test our system out.

With a new IRM server there are a set of predefined context templates and roles which again are set up in a way which reflects the most common use we've learned from our customers. We will use these out of the box configurations as they are to create the first context against which we will seal some content.

First log in to your Oracle IRM Management Website located at https://irm.company.com/irm_rights/. Currently the system is only configured to use the built-in LDAP for users, so use the only account we have at the moment, which by default is weblogic. Once logged in switch to the Contexts tab. Click on the New Context icon () in the menu bar on the left. In the resulting dialog select the Standard context template and enter in a name for the context. Then just hit finish; the weblogic account will automatically be made the manager. You'll now see your brand new context ready for users to be assigned. Now click on the Assign Role icon () in the menu bar and in the resulting dialog search for your only user account, weblogic, and add it to the list on the right. Now select a role for this user. Because we need to create a document with this user we must select contributor, as this is the only role which allows for the ability to seal. Finally hit next and then finish. We now have a context with a user that has the rights to create a document. The next step is to configure the IRM Desktop to get these rights from the server.

Install the Oracle IRM Desktop

Before we can seal a document we need the client software installed. Oracle IRM has a very small, lightweight client called the Oracle IRM Desktop which can be freely downloaded in 27 languages from here. Double click on the installer and click on next... Next again... And finally on install... Very easy. You may get a warning about closing Outlook, Word or another application and most of the time no reboots are required. Once it is installed you will see the IRM Desktop icon running in your tool tray, bottom right of the desktop.

Seal your first document

Finally the prize is within reach, creating your first sealed document.
The server is running, we've got a context ready, a user assigned a role in the context, but there is one simple and obvious hoop left to jump through. To seal a document we need to have the user's rights cached to the local machine. For this to take place, the IRM Desktop needs to know where the Oracle IRM server is on the network so we can synchronize these rights and then be able to seal a document. The usual way for the IRM Desktop to learn about the IRM server is automatically, when you open an existing piece of content that someone has sent you... ack. Bit of a chicken or the egg dilemma. The solution is to manually tell the IRM Desktop the location of the IRM server and then force a synchronization of rights. Right click on the Oracle IRM Desktop icon in the system tray and select Options.... Then switch to the Servers tab in the resulting dialog. There are no servers in the list because you've never opened any content. This list is usually populated automatically but we are going to add a server manually, so click on New.... Into the dialog enter in the full URL to the IRM server. Note that this time you use the path /irm_desktop/ and not /irm_rights/. You can see an example from the image below. Click on the validate button and you'll be asked to authenticate. Enter in your weblogic username and password and also check the Remember my password check box. Click OK and the IRM Desktop will confirm a successful connection to the server. OK all the dialogs and we are ready to synchronize this user's rights to the desktop. Right click once more on the Oracle IRM Desktop icon in the system tray. Now the Synchronize menu option is available. Select this and the IRM Desktop will now talk to the IRM server, authenticate using your weblogic account and get your rights to the context we created. Because this is the first time this user has communicated with the IRM server, the IRM Desktop presents a privacy policy dialog. This is a chance for the business to ask users to agree to any policy about the use of IRM before opening secured documents. In our guide we've not bothered to set up this URL so just click on the check box and hit Accept. The IRM Desktop will then talk to the server, get your rights and display a success dialog.

Let's protect a document

Now we are ready to seal a piece of content. In my guide I'm going to protect a Microsoft Word document. This means I have to have a copy of Office installed; in this guide I'm using Microsoft Office 2007. You could also seal a PDF document; for that you'll need to download and install Adobe Acrobat Reader. A very simple test could be to seal a GIF/JPG/PNG or piece of HTML because this is rendered using Internet Explorer. But as I say, I'm going to protect a Word document. The following example demonstrates choosing a file in Windows Explorer; there are many ways to seal a file and you can watch a few in this video. Open a copy of Windows Explorer and locate the file you wish to seal. Right click on the document and select Seal To -> Context. You are now presented with the Select Context dialog. Choose the context we created earlier and OK the dialog. You'll now have a sealed copy of the document sat in the same location. Double click on this document and it will open, again using the credentials you've already provided. That is it; now you just need to add more users, more documents, more classifications and start exploring the different roles and experiment with different offline periods etc.
You may wish to setup the server against an existing LDAP or Active Directory environment instead of using the built in WebLogic LDAP store. You can read how to use your corporate directory here. But before we finish this guide, there is one more article and arguably the most important article of all. Next I discuss the all important decision making surrounding the actually implementation of Oracle IRM inside your business. Who has rights to what? How do you map contexts to your existing business practices? It is the next article which actually ensures you deploy a successful IRM solution by looking at the business and understanding how they use your sensitive information and then configuring Oracle IRM to reflect their use.

    Read the article

< Previous Page | 396 397 398 399 400 401 402 403 404 405 406 407  | Next Page >