Search Results

Search found 45505 results on 1821 pages for 'change directory'.


  • Chrome Rewrite of Host: in HTTP GET

    - by user912679
    At some point in the past I had a plugin for Firefox that rewrites the HTTP headers being sent by your browser, specifically the "Host:" line in the HTTP GET request. I can't find this plugin online. Does anyone know a plugin/way to do this? I am looking for one for Chrome, but any would work. The specific reason for this is that I am trying to work on a WordPress website which I just did a DNS change on. Until that DNS change takes effect I can use the IP, but since it's a shared host the Host line isn't set right.
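
    A minimal sketch of two non-plugin workarounds, assuming the shared-host setup described above (the IP and domain below are placeholders):

        # Send requests straight to the server's IP while presenting the site's hostname:
        curl -H "Host: www.example-wordpress-site.com" http://203.0.113.10/
        # Or map the name locally, so any browser resolves it to the new IP until DNS propagates:
        echo "203.0.113.10 www.example-wordpress-site.com" | sudo tee -a /etc/hosts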

    Read the article

  • Caching of PATH environment variable on Windows?

    - by jwir3
    I'm assisting one of our testers in troubleshooting a configuration problem on a Windows XP SP3 system. Our application uses an environment variable, called APP_HOME, to refer to the directory where our application is installed. When the application is installed, we set the following environment variables:

        APP_HOME = C:\application\
        PATH = %PATH%;%APP_HOME%bin

    Now, the problem comes in that she's working with multiple versions of the same application. So, in order to switch between version 7.0 and 8.1, for example, she might use:

        APP_HOME = C:\application_7.0\

    and then change it to:

        APP_HOME = C:\application_8.1\

    The problem is that once this change is made, the PATH environment variable apparently still holds the old expansion of the APP_HOME variable. So, for example, after she has changed APP_HOME, PATH still refers to the 7.0 bin directory. Any thoughts on why this might be happening? It looks to me like the PATH variable is caching the expansion of the APP_HOME environment variable. Is there any way to turn this behavior off?
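
    Two usual culprits here, as a hedged guess: a process only reads the environment when it starts, so any already-open Command Prompt or Explorer-launched program keeps the old expansion until it is restarted; and if Path was saved with %APP_HOME% already expanded, it is stored as plain REG_SZ rather than REG_EXPAND_SZ, so the 7.0 path is baked in. A quick check (cmd syntax):

        :: REG_EXPAND_SZ means %APP_HOME% is re-expanded for each new process;
        :: REG_SZ means the old path was written out literally at save time.
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path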

    Read the article

  • Running django custom management commands with supervisord

    - by mfsaint
    I'd like to use supervisord to run some commands for my Django project, but I keep getting the following error in supervisor.log:

        2012-05-18 17:52:15,784 INFO spawnerr: can't find command 'source'

    If I remove the "source" command, the log shows the same error: can't find command 'python'. supervisord.conf excerpt:

        [program:django]
        directory=/home/mf/projects/djangopj/
        command=beanstalkd -l 127.0.0.1 -p 11300
        command=source /home/mf/virtualenvs/env/bin/activate
        command=python manage.py command1
        command=python manage.py command2
        user=mf
        autostart=true
        autorestart=true

    I tried removing the directory and adding the absolute path to the commands, but I kept getting the same error. I run supervisord with the following command:

        supervisord -c supervisord.conf -l supervisor.log
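
    Worth noting: supervisord runs exactly one command= per [program:x] section, and it runs it directly rather than through a shell, so a shell builtin like source can never be found, and only one of the repeated command= lines takes effect. A minimal sketch of the usual workaround, with paths taken from the question: one small wrapper script per job (run_command1.sh is a hypothetical name), calling the virtualenv's interpreter directly so "activate" becomes unnecessary:

        #!/bin/sh
        # run_command1.sh (hypothetical name) - one wrapper per management command
        cd /home/mf/projects/djangopj
        exec /home/mf/virtualenvs/env/bin/python manage.py command1

    Each wrapper (and beanstalkd) then gets its own [program:...] section whose command= points at that script.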

    Read the article

  • Percona MySQL 5.5 fails to start

    - by keymone
    Trying to set up a new server here, but I keep getting this in the error log:

        mysqld_safe Starting mysqld daemon with databases from /data/mysql/myisam
        [Warning] Can't create test file /data/mysql/myisam/hostname.lower-test
        [Warning] Can't create test file /data/mysql/myisam/hostname.lower-test
        [Note] Flashcache bypass: disabled
        [Note] Flashcache setup error is : setmntent failed
        /usr/sbin/mysqld: File '/var/mysql/bin/bin-log.index' not found (Errcode: 13)
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete
        mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    Everything under /data/mysql (its ibdata and myisam folders) is owned by mysql:mysql and has proper permissions; the same goes for the folders with bin and relay logs under /var/mysql. AppArmor is purged from the server. Any ideas?

    PS: It seems like something other than AppArmor is affecting permissions to access the MySQL files. After I changed the data directory to a more default one, /var/lib/mysql, the "Can't create test file" error is gone, but "'/var/mysql/bin/bin-log.index' not found (Errcode: 13)" is still there.

    PPS: So I installed AppArmor back and added all the folders to mysqld's profile, and the errors mentioned above are now gone (or mysql doesn't even get to that point now). What I have now is this:

        /usr/sbin/mysqld: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

    Banging my head against the wall.
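
    Errcode 13 is EACCES, and it can come from any component of the path, not just the final directory. A minimal sketch of a permissions audit, assuming the paths from the log above:

        # namei walks each directory level; the mysql user needs x on every
        # parent directory and w inside /var/mysql/bin itself.
        namei -l /var/mysql/bin/bin-log.index
        sudo -u mysql test -w /var/mysql/bin && echo writable || echo 'not writable as mysql'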

    Read the article

  • Superpower Your Touchpad Computer with Scrybe

    - by Matthew Guay
    Are you looking for a way to help your touchpad computer make you more productive? Here's a quick look at Scrybe, a new application from Synaptics that lets you superpower it.

    Touchpad devices have become increasingly interesting as they've added support for multi-touch gestures. Scrybe takes this to the next level and lets you use your touchpad as an application launcher. You can launch any application or website, or complete many common commands on your computer, with a simple gesture. Scrybe works with most modern Synaptics touchpads, which are standard on most laptops and netbooks. It is optimized for newer multi-touch touchpads, but can also work with standard single-touch touchpads. It works on Windows 7, Vista, and XP, so chances are it will work with your laptop or netbook.

    Get Started With Scrybe

    Head over to the Scrybe website and download the latest version (link below). You are asked to enter your email address, name, and information about your computer, but you actually only have to enter your email address. Click Download when finished. Run the installer when it's downloaded. It will automatically download the latest Synaptics driver for your touchpad and any other components needed for Scrybe. Note that the Scrybe installer will ask to install the Yahoo! toolbar, so uncheck this to avoid adding this worthless browser toolbar.

    Using Scrybe

    To open an application or website with a gesture, press 3 fingers on your touchpad at once, or, if your touchpad doesn't support multi-touch gestures, press Ctrl+Alt and press 1 finger on your touchpad. This will open the Scrybe input pane; start drawing a gesture, and you'll see it on the grey square. The input pane shows some default gestures you can try. Here we drew an "M", which opens our default music player. As soon as you finish the gesture and lift your finger, Scrybe will open the application or website you selected. A notification balloon will let you know what gesture was performed. When you're entering your gesture, the input pane shows white "ink". The "ink" will turn blue if the command is recognized, but will turn red if it isn't. If Scrybe doesn't recognize your command, press 3 fingers and try again.

    Scrybe Control Panel

    You can open the Scrybe Control Panel to enter or change commands by entering a box-like gesture, or by right-clicking the Scrybe icon in your system tray and selecting "Scrybe Control Panel". Scrybe has many pre-configured gestures that you can preview and even practice. All of the gestures in the Popular tab are preset and cannot be changed. However, the ones in the Favorites tab can be edited. Select the gesture you wish to edit, and click the gear icon to change it. Here we changed the email gesture to open Hotmail instead of the default Yahoo Mail. Scrybe can also help you perform many common Windows commands such as Copy and Undo; select the Tools tab to see all of these commands.

    Scrybe has many settings you may wish to change. Select the Preferences button in the Control Panel to change these. Here are some of the settings we changed:

      • Uncheck "Display a message" to turn off the tooltip notifications when you enter a gesture.
      • Uncheck "Show symbol hints" to turn off the sidebar on the input pane.
      • Select the search engine you want to open with the Search gesture. The default is Yahoo, but you can choose your favorite.

    Adding a New Scrybe Gesture

    The default Scrybe options are useful, but the best part is that you can assign gestures to your own programs or websites. Open the Scrybe Control Panel, and click the plus sign in the bottom left corner. Enter a name for your gesture, and then choose whether it is for a website or an application. If you want the gesture to open a website, enter the address in the box. Alternately, if you want your gesture to open an application, select Launch Application and then either enter the path to the application, or click the button beside the Launch field and browse to it. Now click the down arrow on the blue box and choose one of the gestures for your application or website. Your new gesture will show up under the Favorites tab in the Scrybe Control Panel, and you can use it whenever you want, or practice it by selecting the Practice button.

    Conclusion

    If you enjoy multi-touch gestures, you may find Scrybe very useful on your laptop or netbook. Scrybe recognizes gestures fairly easily, even if you don't enter them perfectly. Just like pinch-to-zoom and two-finger scroll, Scrybe can quickly become something you miss on other laptops.

    Download Scrybe (registration required)

    Read the article

  • Apache VirtualHost running very slow on OS X 10.7 (Lion)

    - by jwerre
    I've set up a few virtual hosts in Lion and it's running very slowly.

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>

        <VirtualHost *:80>
            ServerName dev.local
            DocumentRoot "/Users/me/mysite"
            <Directory /Users/me/mysite>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Then in /etc/hosts I added:

        127.0.0.1 dev.local

    Everything works fine, but it's sooooo slow: 5 or so seconds to reload a simple "Hello World" HTML page. Here is the strange part: if I make a symbolic link to the site in my ~/Sites folder (ln -s ~/mysite ~/Sites/mysite) and navigate to http://localhost/~me/mysite, it's nice and fast, the way it should be.
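
    A commonly reported cause for exactly this symptom on Lion, offered here as a hedged guess: the browser tries an IPv6 (AAAA) lookup for the vhost name first, and the multi-second delay is the fallback timeout; the ~me URL is fast because localhost already has a ::1 entry. A one-line sketch of the fix, using the dev.local name above:

        # Give the vhost name an IPv6 loopback entry alongside the IPv4 one:
        printf '::1\tdev.local\n' | sudo tee -a /etc/hosts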

    Read the article

  • Writing a "Hello World" Device Driver for kernel 2.6 using Eclipse

    - by Isaac
    Goal: I am trying to write a simple device driver on Ubuntu, and I want to do this using Eclipse (or a better IDE that is suitable for driver programming). Here is the code:

        #include <linux/module.h>

        static int __init hello_world( void )
        {
            printk( "hello world!\n" );
            return 0;
        }

        static void __exit goodbye_world( void )
        {
            printk( "goodbye world!\n" );
        }

        module_init( hello_world );
        module_exit( goodbye_world );

    My effort: After some research, I decided to use Eclipse CDT for developing the driver (while I am still not sure if it supports multi-threaded debugging tools). So I:

      • Installed Ubuntu 11.04 desktop x86 on a VMware virtual machine,
      • Installed eclipse-cdt and linux-headers-2.6.38-8 using Synaptic Package Manager,
      • Created a C project named TestDriver1 and copy-pasted the above code into it,
      • Changed the default build command, make, to the following customized build command: make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1

    The problem: I get an error when I try to build this project using Eclipse. Here is the log for the build:

        **** Build of configuration Debug for project TestDriver1 ****
        make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1 all
        make: Entering directory `/usr/src/linux-headers-2.6.38-8-generic'
        make: *** No rule to make target `vmlinux', needed by `all'. Stop.
        make: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic'

    Interestingly, I get no error when I use the shell instead of Eclipse to build this project. To use the shell, I just create a Makefile containing obj-m += TestDriver1.o and use the above make command to build. So something must be wrong with the Eclipse Makefile. Maybe it is looking for the vmlinux architecture (?) or something while the current architecture is x86. Maybe it's because of VMware? As I understand it, Eclipse creates the makefiles automatically, and modifying them manually would cause errors in the future or make managing the makefile difficult. So, how can I compile this project in Eclipse?
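
    The build log above actually shows the difference: Eclipse appends an "all" target to the custom build command, and the kernel headers tree has no rule to build all (it would need vmlinux). A hedged sketch of the working shell flow, with paths taken from the question; the likely Eclipse-side fix is to make "modules" the build target instead of "all":

        # One-line kbuild Makefile, then an explicit "modules" target:
        cat > Makefile <<'EOF'
        obj-m += TestDriver1.o
        EOF
        make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1 modules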

    Read the article

  • PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/apc.so'

    - by user3207586
    I have updated my PHP from 5.3.3 to 5.4.31 on Debian 6 Squeeze, and now get:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/apc.so' - /usr/lib/php5/20100525/apc.so: cannot open shared object file: No such file or directory in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0

    During the installation, the system asked me if I wanted to keep the current php.ini or use the new one; I said to keep the current one. Now I have these two errors when I restart Apache. What should I do to solve them?
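
    The warnings mean some ini file still loads 5.3-era extensions from a directory that no longer exists; APC and Suhosin are not available for the packaged 5.4. A minimal sketch, assuming Debian's default layout (the conf.d file names are a guess, so check the grep output first):

        grep -rn -e apc.so -e suhosin.so /etc/php5/
        # Comment out whichever lines the grep finds, e.g.:
        sudo sed -i 's/^extension=apc\.so/;&/' /etc/php5/conf.d/apc.ini
        sudo sed -i 's/^extension=suhosin\.so/;&/' /etc/php5/conf.d/suhosin.ini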

    Read the article

  • Using rsync when files on one end are all lowercase

    - by DormoTheNord
    I want to rsync a lot of files from a Windows box to a Linux server. The problem is, the files on Windows are all mixed case, and the files on the Linux server need to be all lowercase. One solution is to have a script that rsyncs to a different directory on the server, copies the files into the main directory, and then converts them all to lowercase. I'd rather find a more elegant solution, though. I'd prefer a command-line application, but I'd be willing to go with a GUI application if that's the best option.
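
    In case the two-step approach wins out, here is a minimal sketch of it (paths and host are placeholders; note that names differing only by case will collide and overwrite each other):

        rsync -av /path/to/windows/files/ user@server:/srv/staging/
        # Then, on the server, lowercase every name, deepest entries first:
        find /srv/staging -depth | while IFS= read -r f; do
            lower=$(dirname "$f")/$(basename "$f" | tr '[:upper:]' '[:lower:]')
            [ "$f" != "$lower" ] && mv "$f" "$lower"
        done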

    Read the article

  • Best development architecture for a small team of programmers ( WAMP Stack )

    - by Tio
    Hi all. I'm in my first month of work at a new company, and after I met the two programmers and asked how things are organized in terms of projects inside the company, they simply shrugged their shoulders and said that nothing is organized. I think my jaw hit the ground right then. (I know some of you think I should quit, but I'm in a privileged position: I'm the most experienced there, so there's room for me to grow inside the company, and I'm taking the high road.) So I talked to the IT guy and one of the programmers, and maybe this week I'm going to get a server all to myself to start organizing things.

    I've used various architectures in my previous work experiences. In one, I was developing on a server on the network (no source control, of course). In another, I was developing on my local computer, with no server on the network, just source control. And at home I have a mix of the two: everything I code is on a server on the network, I have those folders under source control, and I also have a no-ip account configured on that server so I can access it from anywhere and show clients anything.

    For me, this last solution (the one I have at home) is the best:

      • Network server with WAMP stack.
      • The server has a public IP so we can access it by domain name.
      • Use subdomains for each project.
      • Everybody works directly on the network server.

    I think the problem arises when two or more people want to work on the same project; in this case the only way to do it is by using source control and local repositories. This is great, but I think it makes development a lot more complicated. In the example I gave, to make a change to the code I would simply need to open the file in my favorite editor, make the change, alter the database, check the changes into source control, and presto, all done. Using local repositories, I would have to get the latest version, run the scripts on the local database to update it, alter the file, alter the database, check in the changes to the network server, update the database on the network server, see if everything is running well on the network server, and presto, all done. To me this seems overcomplicated for a change to a simple PHP page. I could share the database between local development and the network server; that sure would help. Maybe the best way to do this is simply:

      • Network server with WAMP stack (a test server, so to speak), publicly accessible through the web.
      • LAMP stack on every developer's computer (minus the database).
      • We develop locally, test, then check the changes into the test server, and presto.

    What do you think? Maybe I should start doing this at home. Thanks and best regards.

    Edit: I'm sorry, I made a mistake and switched WAMP with LAMP, sorry about that.

    Read the article

  • Overview of getting and setting the URL and parts of the URL using angularjs and/or Javascript

    - by Sandy Good
    Getting and setting the URL, and different parts of the URL, are a basic part of application design:

      • Page navigation
      • Deep linking
      • Providing a link to the user
      • Querying data
      • Passing information to other pages

    Both angularjs and JavaScript provide ways to get/set the URL and parts of the URL. I'm looking for the following information.

    Situation:

      • Show a simple URL in the browser address bar to the user.
      • Provide a more detailed URL with string parameters to the page that the user will not see. In other words, two different URLs will be used: a simple one that the user sees in the browser, and a more detailed one available to the page on load.
      • Get URL info with PHP when the page initially loads, but don't reload the PHP page when the user needs more detailed info that is already loaded but not displayed yet.
      • Set the URL with a more detailed URL for deep linking as the user drills down to more specific information.
      • Get URL info in a controller or JavaScript when angularjs detects a change in the URL with routing.

    Hash or query string or both? Should I use a hash # in the URL, a string ?=, or both? Here is what I currently know and what I want:

      • A query string HTTP:\\www.name.com?mykey=itemID will prevent angularjs from reloading the page. So I can change the URL by adding/changing the string at the end, thereby providing new info to the page, and keep the page from reloading.
      • I can change the URL and force a page reload with: window.location.href = "#Store/" + argUserPubId + "?itemID=home";
      • If home is the itemID string, I want the code to simply load the page and not display more detailed information. If there is a real itemID in the URL query string, I want the code to display the more detailed information.
      • Code from angularjs will run either from the controller specified in the routing, or a controller specified in the HTML, or both. The angularjs code specified in the routing seems to run first, before the code specified in the HTML.
      • A different URL for the page can be used in the angularjs templateUrl: than the URL that was sent to the browser address bar:

            when('/Store/:StoreId', {
                templateUrl: function(params){return 'Client_Pages/Stores.php?storeID=' + params.StoreId;},
                controller: 'storeParseData'
            }).

        The above code detects http:\\www.name.com\Store\StoreID in the browser, but SENDS http:\\www.name.com\Client_Pages/Stores.php?storeID=StoreID to the page. In the above code, a function is used for the angularjs routing templateUrl: to dynamically set the templateUrl.

    So, when the user clicks something to see details of an item, how should I configure the URL? Should I use angularjs $location or window.location.href? Should I use a longer URL with more parameters, a hash bang, or a query string? Should I use:

        http:\\www.name.com\Store\StoreID\ItemID or
        http:\\www.name.com\Store\StoreID#ItemID or
        http:\\www.name.com\Store\StoreID?ItemID or
        http:\\www.name.com\Store#StoreID?ItemID or

    something else?

    Read the article

  • allowing sudo to delete certain files

    - by chandank
    I would like to allow sudo users to delete certain files in the /tmp directory. I have added Allow_Cmnd /usr/sbin/userdel for sudo users, but this does not delete all the /tmp files associated with the user. So how shall I tweak sudoers to allow them to delete certain files in the /tmp directory only? I googled a bit and learned that regex may be applicable here. I tried a couple of tweaks, but it's not working for me. I would like the users to have the ability to execute a command such as: find /tmp -uid 10002 | grep joeuser | xargs rm -rf
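
    For reference, a hedged sudoers sketch (added via visudo; the group name is illustrative). Note that sudoers patterns are shell-style globs, not regex, and that /bin/rm -rf /tmp/* can still be abused with arguments like /tmp/../etc, so a root-owned wrapper script that validates its arguments is generally safer than a bare glob:

        %tmpcleaners ALL=(root) NOPASSWD: /bin/rm -rf /tmp/*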

    Read the article

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database that has 1 million customer invoice records in it. Using the database, our client wants its customers to be able to log into a front-end web site and then be able to view, modify, and download their company's invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web-page invoice-rendering performance. The 1 million invoice database covers just 90 days of sales, so we will remove invoices over 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats, so for example it is easy for us to convert to and from SQL and XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure. How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip codes into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer's invoices in each customer's home directory in XML format? And secondly, what would you suggest is the best method to render customer invoices on a web page and allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.

    Read the article

  • How to Audit and Monitor BI Publisher Reports Access?

    - by kanichiro.nishida
    Do you know who is accessing which report, at what time, in your reporting environment? As you deliver BI Publisher reports to the production environment and your users start using them as part of their daily business operations, you might wonder about such questions. With compliance becoming an integral part of any business requirement, auditing your reporting environment is also becoming one of the most critical and hot agenda items in today's enterprise reporting deployments. Also, I believe that auditing the reporting environment is not just for compliance, but also a way to understand how your users are using the reports and to improve the user reporting experience.

    BI Publisher introduced an enterprise-level auditing feature with its 11g release, with an integration of the Oracle Fusion Middleware Audit Framework, which comes out of the box with the installation. Yes, this is another great example of the benefit of its tight integration with Fusion Middleware introduced with the BI Publisher 11g release.

    What Information Can I Know about our Reporting Environment?

    With this new auditing feature you can now gain the following insights:

      • When a particular user logs in or out
      • What report is accessed, by whom, when, and how
      • How long it takes to process a particular report

    Yes, it's all there. This is great news for 10g users, right? I used to be one of them, working with many different IT organizations and craving this, but it's here now with 11g!

    How Can I Access the Auditing Information?

    With the Fusion Middleware Audit Framework, BI Publisher feeds such information either to a log file or to a database. If you decide to get the data into the database then, of course, you can use BI Publisher to report and publish, or visualize the data to gain more insights. One thing, though: in order to feed the data it requires a few extra steps, which I'll cover later. Regardless of whether it's the log file or the database that stores the auditing data, first you need to enable the auditing feature, which is not enabled by default. So, let's take a look at how to enable it.

    How to Enable the Auditing Feature?

    Here is a quick list of the steps:

      • Enable auditing-related properties in the BI Publisher configuration file
      • Copy the component_events.xml file to the Fusion Middleware Audit Framework's location
      • Enable the audit policy with Fusion Middleware Control (Enterprise Manager)
      • Restart WebLogic Server

    Enable auditing-related properties in the BI Publisher configuration file

    Open the xmlp-server-config.xml file, which is located under the $BI_HOME/user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration directory, and set the following three property values to 'true':

      • AUDIT_ENABLED
      • MONITORING_ENABLED
      • AUDIT_JPS_INTEGRATION

    The AUDIT_JPS_INTEGRATION property is not in the file by default, so you need to add it. Here is an example of how the xmlp-server-config.xml file looks after the modification:
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <xmlpConfig xmlns="http://xmlns.oracle.com/oxp/xmlp">
            <property name="SAW_SERVER" value="adc6160510"/>
            <property name="SAW_SESSION_TIMEOUT" value="90"/>
            <property name="DEBUG_LEVEL" value="exception"/>
            <property name="SAW_PORT" value="7001"/>
            <property name="SAW_PASSWORD" value=""/>
            <property name="SAW_PROTOCOL" value="http"/>
            <property name="SAW_VERSION" value="v6"/>
            <property name="SAW_USERNAME" value=""/>
            <property name="SAW_URL_SUFFIX" value="analytics/saw.dll"/>
            <property name="MONITORING_ENABLED" value="true"/>
            <property name="MONITORING_DEFAULT_HISTORY_SIZE" value="30"/>
            <property name="AUDIT_ENABLED" value="true"/>
            <property name="JSESSION_RESET_DISABLED" value="true"/>
            <property name="SECURITY_MODEL" value="ORACLE_AS_JPS"/>
            <property name="AUDIT_JPS_INTEGRATION" value="true"/>
        </xmlpConfig>

    Copy component_events.xml file to the Audit Framework's location

    There is an audit-related configuration file provided by BI Publisher that needs to be copied to the Audit Framework location:

      1. Go to the $BI_HOME/oracle_common/modules/oracle.iau_11.1.1/components directory.
      2. Create a directory called 'xmlpserver'.
      3. Copy the component_events.xml file from /user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Audit to the newly created 'xmlpserver' directory.

    Enable the audit policy with Fusion Middleware Control (EM)

    Now you can set a level of auditing for each of BI Publisher's auditing types by using Fusion Middleware Control (a.k.a. Enterprise Manager).

      1. Log in to the Fusion Middleware Control UI at http://hostname:port/em (e.g. reporting.oracle.com:7001/em).
      2. Access the Audit Policy configuration UI from the menu: under WebLogic Domain, right-click bifoundation_domain, select Security and then click Audit Policy.
      3. Set the audit level for BI Publisher. While you can select 'Custom' to set a customized level of auditing for each component, I'm selecting 'Medium' for this exercise.

    Restart WebLogic Server

    After all the above settings, you need to restart the WebLogic Server instance in order for those changes to take effect. If you're on Windows you can simply do this by selecting 'Stop BI Servers' and 'Start BI Servers' from the Start menu. If you're on Linux you can run 'stopWebLogic.sh' and 'startWebLogic.sh', which can be found under $BI_HOME/user_projects/domains/bifoundation_domain/bin.

    Start Auditing!
    Now, assuming that you have completed the above steps successfully, from this point on any reporting activity should be audited and stored in the auditing log file, which can be found at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver/audit.log

    And here is a sample of the log file:

        2011-02-18 02:25:49.928 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "200" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 86608512 486989824 24517 169 - - -
        2011-02-18 02:25:49.929 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "200" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
        2011-02-18 03:25:49.554 "" "ReportDataProcess" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "260" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" - - - - - - - - - - - - - - - - - 34980200 554033152 - 134 - - -
        2011-02-18 03:25:50.282 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "263" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 16158944 554033152 24517 503 - - -
        2011-02-18 03:25:50.282 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "263" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
        2011-02-18 03:30:00.448 "barack.obama" "UserLogin" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000406,0" - - - - "bipublisher(11.1.1)" "UserSession" "26" "" - - - - - - - - - - - - - - - - - - - - - - - - -

    From the above log file you can tell that a user 'steve.jobs' was running reports like 'Balance Letter' in the afternoon of 2/18, and another user 'barack.obama' logged into the system at 3:30 on the same day. Yes, every login and logout will be recorded, and every report access will be recorded in this log file.

    Now, looking at this text file to understand what's going on is pretty overwhelming. And accessing this log file, which is located on the server's file system where BI Publisher/WebLogic Server are running, is another challenge in typical deployment scenarios. And that's where the database storage option for the auditing data comes into the picture. I'll talk about that tomorrow, so stay tuned!

    Read the article

  • How will Quantum computing affect us?

    - by CiscoIPPhone
    I am interested in quantum computing, but have not studied it in depth. Things like Shor's algorithm intrigue me. My question is: If quantum computing took off in a big way (i.e. functional quantum home computers were available) how would it affect us programmers and software developers? Would we have to learn how to make use of superposition and entanglement - would it change how we write algorithms? Would more mathematical programmers be required/would we need new skills? Would it change nothing at all from our perspective (i.e. would it be abstracted)? Your opinion is welcome.

    Read the article

  • htpasswd and htaccess in Plesk 9.3

    - by J White
    Greetings. Here is my situation. I currently have one website on my dedicated server. As of now, I have protected the directory /exclusive using the Plesk control panel. I am having a billing company install their password management script on the server, but they need the absolute location of the .htpasswd file. I can't find it or the .htaccess file. Would it be easier to unprotect the /exclusive directory and create the .htaccess file in Notepad? If this is done, where should I place the .htpasswd file? -jw
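
    If you do unprotect the directory in Plesk and manage it by hand, a minimal sketch looks like this (paths follow a typical Plesk 9 vhost layout and are a guess, so verify yours; the key point is to keep .htpasswd outside the web root):

        # Create the password file and one user, then wire it up from .htaccess:
        htpasswd -c /var/www/vhosts/example.com/.htpasswd someuser
        cat > /var/www/vhosts/example.com/httpdocs/exclusive/.htaccess <<'EOF'
        AuthType Basic
        AuthName "Exclusive"
        AuthUserFile /var/www/vhosts/example.com/.htpasswd
        Require valid-user
        EOF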

    Read the article

  • Debuild fails to make package for bluelog-1.04

    - by Dean Howell
    When trying to build a package for bluelog, debuild gives several errors. In the past, I've used checkinstall to quickly build crude packages. I am now trying to do it the right way and upload to a PPA. Bluelog can be found here: http://www.digifail.com/software/bluelog.shtml

    Here is the output from debuild:

        dpkg-buildpackage -rfakeroot -D -us -uc
        dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): -D_FORTIFY_SOURCE=2
        dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions -Wl,-z,relro
        dpkg-buildpackage: source package bluelog
        dpkg-buildpackage: source version 1.0.4-0ubuntu1
        dpkg-buildpackage: source changed by Dean Howell <dean@unknown>
        dpkg-source --before-build bluelog
        dpkg-buildpackage: host architecture amd64
        fakeroot debian/rules clean
        dh clean
        dh_testdir
        dh_auto_clean
        make[1]: Entering directory `/home/dean/Launchpad Builds/bluelog/bluelog'
        rm -rf bluelog www/cgi-bin/* *.o *.txt *.log *.gz *.cgi
        make[1]: Leaving directory `/home/dean/Launchpad Builds/bluelog/bluelog'
        dh_clean
        dpkg-source -b bluelog
        dpkg-source: warning: Version number suggests Ubuntu changes, but Maintainer: does not have Ubuntu address
        dpkg-source: warning: Version number suggests Ubuntu changes, but there is no XSBC-Original-Maintainer field
        dpkg-source: info: using source format `3.0 (quilt)'
        dpkg-source: info: building bluelog using existing ./bluelog_1.0.4.orig.tar.gz
        dpkg-source: error: cannot represent change to bluelog/Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog: binary file contents changed
        dpkg-source: error: add Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog in debian/source/include-binaries if you want to store the modified binary in the debian tarball
        dpkg-source: error: unrepresentable changes to source
        dpkg-buildpackage: error: dpkg-source -b bluelog gave error exit status 2
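
    A hedged guess from the paths in the error output: a previous build's staging tree (Builds/.../debian/bluelog/usr/bin/bluelog) is sitting inside the unpacked source, so dpkg-source sees binaries that differ from the orig tarball. Clearing that residue before rebuilding is likely safer than whitelisting the binary in debian/source/include-binaries, as the error suggests:

        # Run from the source directory; "Builds" is the stray tree named in the error.
        rm -rf Builds debian/bluelog
        debuild -us -uc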

    Read the article

  • htaccess Redirect / RedirectMatch with URLs that contain Special / Encoded Characters

    - by dSquared
    I'm currently in the process of applying a variety of 301 redirects in an .htaccess file for a website that recently changed its structure. Everything is working as expected, except for URLs that contain special characters; for these I am getting 404 errors. For example, the following directives, which have a registered trademark symbol (®), bring up 404 pages:

        RedirectMatch 301 ^/directory/link-with®-special-character(/)?$ somelink.com
        RedirectMatch 301 ^/directory/link-with%c2%ae-special-character(/)?$ somelink.com

    I've also tried using Redirect and RewriteRule, and surrounding the URLs with double quotes, and nothing seems to work. Does anyone know what might be happening, or the proper way to handle these types of directives? Any help is greatly appreciated.
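
    An untested sketch of one thing worth trying: mod_rewrite matches against the percent-decoded path, and PCRE can address the symbol's raw UTF-8 bytes with \xHH escapes, which sidesteps putting a literal ® in the config file:

        cat >> .htaccess <<'EOF'
        RewriteEngine On
        RewriteRule ^directory/link-with\xc2\xae-special-character/?$ http://somelink.com/ [R=301,L]
        EOF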

    Read the article

  • Plugin 'InnoDB' registration as a STORAGE ENGINE failed on Windows 7

    - by NimChimpsky
    I have had to reinstall MySQL; however, the service is failing to start, with the above cause listed in Event Viewer. One suggested solution is to delete a couple of files prefixed with ib_logfile, which are left over from the old installation. However, I do not have these files, and my service is still failing to start...? When I say I don't have these files: I did a search using Windows search with zero results, and they are definitely not present in my MySQL install directory. And I don't have the "documents and settings/application data" folder referenced in the link. In fact, I have only one MySQL install directory, and I know where that is. What do I need to delete/change? The instance is configured OK (I ran that as administrator) and it is listed in services, but the service itself fails to start. Any tips, other than going over to PostgreSQL?
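
    Worth checking before anything else: the ib_logfile pair lives in the configured data directory, which on Windows 7 usually sits under the hidden C:\ProgramData tree rather than the install directory; that may be why the search found nothing. A hedged sketch (cmd syntax; paths and service name are typical defaults, so verify them against your my.ini):

        findstr /i "datadir innodb" "C:\Program Files\MySQL\MySQL Server 5.5\my.ini"
        net stop MySQL
        del "C:\ProgramData\MySQL\MySQL Server 5.5\data\ib_logfile0"
        del "C:\ProgramData\MySQL\MySQL Server 5.5\data\ib_logfile1"
        net start MySQL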

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here.

    In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:

    "- You could have talked about how by deploying Oracle's open-standards-based technology you were able to integrate any new system in your infrastructure in days.
    - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining a perfect employee and customer single view.
    - You could have talked about how you are now able to cut response times to your audit and security teams to 1/10th of your former times.
    Instead you spent 6 minutes talking about single sign-on and self-provisioning? If I didn't know your IDM offer so well, I would now be wondering what its differences from Microsoft's offer were. Sorry for not giving a positive comment here but, please, your IDM suite is very good and you simply aren't promoting it well enough."

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998, when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist, so I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the systems side (Sun resellers, Solaris stuff; I even worked with Sun's engineering in the making of a hierarchical storage product, Sun CIS if you know it) and the applications side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery, and pre-sales/architecture design of IDM solutions for most big customers in Portugal. To name a few projects:

      • The first European deployment of Sun Access Manager (SAPO - Portugal Telecom)
      • The identity repository of 5/5 of the biggest Portuguese banks
      • The Portuguese government's federation of services project

    DP: OK, in your blog response you mentioned 3 topics. 1. Using Oracle's standards-based architecture, you were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles, etc.?

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions, but then reality kicks in. As an ISP, I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps: everything was there. As a good part of the systems and apps were running on UNIX, a connector became needed in order to have UNIX boxes authenticate against AD. And that strategy worked, but each new machine required the component to be installed, monitoring had to be done for that component, and each new app had to be independently certified.

    A self-care user portal was an ongoing project; AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't, nor had any intention of ever being. When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved, so on the go-live things weren't properly tested, and a general outage followed. Several hours and one rollback later, everything was back working. But the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self-service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again.

    Simply by putting an Oracle LDAP server in the middle and replicating the user info from AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them, so integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self-care portal, combined with a user workflow, so all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer, and even Tru64 systems were, for the first time, integrated with 5 minutes of work by a junior system admin. True, all the Windows clients and MS apps still went to AD for their authentication needs, so from the start everybody knew they weren't 100% free of migration pains, but now they had a single point of problems to look at.

    If you're looking for numbers:

      • 500K directory entries (users)
      • 200-300 systems

    After the initial setup, I personally integrated about 20 systems/apps against LDAP in one day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today will be sold again tomorrow. Companies that spread across different markets may see the regulator forcing a sale of part of the company for monopoly reasons, and companies present in multiple countries have to comply with different legislations. Our job as IT architects, while addressing customer and employee authentication services, is quite hard and quite contrary. On one hand, we need to give all of our employees access to the relevant systems, apps, and resources, and we already have marketing talking with us, trying to find out who's a customer of the bought company but not of ours to address. On the other hand, we have to do all that while keeping in mind that we may have to break up the whole effort, and that different countries' legislation may become a problem with a full integration plan.

    That's a job for user federation. You don't want to be the one telling your president that he will sell that business unit without its customer database (making the deal worth a lot less), or that the buyer will take with him a copy of your entire customer database. Federation enables you to start controlling permissions for users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins? Do you want, because of that, to have a dedicated system for their expense reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a federation trust circle, and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view; you obviously control whether a user has access to any particular application, whether that user is in your local database or stored in a directory on the other side of the world.

    DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system administration for has a SOX compliance policy in place (I remind you that I live in Portugal, so this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample, and we spend about 15 man-days gathering all the required info they ask for. I did some work with Sun's Identity Auditor and, from what I've been seeing, Oracle's product is even better; I've seen that most of the information they ask for would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from the Identity Auditor team would do a much better job than me explaining these time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest, so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • ActiveSync not working with forms-based authentication

    - by Chad
    I have an Exchange 2003 SP2 single-backend server with an SSL cert. I was having trouble getting OMA to work, so I found an MS article about making a registry hack and creating a new exchange-oma virtual directory. I am able to connect and access content from my mailbox by using https://mail.domainname.com/oma and my credentials. ActiveSync was not working on a Windows Mobile phone or iPhone. I found another article about using forms-based authentication and SSL in a single-Exchange-server environment, and the fix was to eliminate FBA and SSL for the Exchange virtual directory. That allows ActiveSync to work now. I have very few mobile users, but they are management, so I need to make ActiveSync work, but I would like to get back to using SSL. http://support.microsoft.com/kb/817379 Any ideas about this setup? Thanks.
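
    For reference, the registry value KB 817379 describes, written as a command (cmd syntax, run on the Exchange server): it points the ActiveSync code at the secondary virtual directory, so forms-based authentication and SSL can stay enabled on /Exchange itself:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MasSync\Parameters" /v ExchangeVDir /t REG_SZ /d "/exchange-oma"
        iisreset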

    Read the article

  • application/x-httpd-php opening rather than pages?

    - by GuyNoir
    For some reason, no matter what page I try to open (.php, .html, .htm, etc.), my server tries to save it rather than displaying the page. However, I do get the "It works!" page for the main domain; it's only when I try to display a page in a sub-directory that it fails. I researched the problem, and I placed a .htaccess file in the directory of the pages that I'm attempting to open, with the code:

        AddType application/x-httpd-php5 .php .html .htm
        AddHandler applicaton/x-httpd-php5 .php .htm .html .my

    I'm running the latest version of Apache, and PHP 5 is installed as the "php5" module. Any help? http://i.stack.imgur.com/fh6dj.png
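
    Two things worth checking, as a hedged guess: the AddHandler line above spells "applicaton" without the second "i", and on a Debian-family system the php5 Apache module has to be enabled, not just installed:

        sudo a2enmod php5
        sudo service apache2 restart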

    Read the article

  • Installation of large programs in OSX.

    - by archagon
    A few newbie OSX questions: Despite the fact that most applications can be installed by dragging them to the Applications directory, some software still requires the creation of a separate program folder. Where should I put this folder? Does it matter? Is the Applications directory special somehow, or is it just a convenient folder with a custom icon? If I move one of these program folders later on, will the program still work? Will shortcuts to files in the folder break? Is there something similar to a registry in OSX? Thanks!

    Read the article

  • Apple Aluminum Keyboard Via Bluetooth - Fn Key Problem

    - by Richard
    I'm connecting an Apple Bluetooth aluminum keyboard (this one) to my Lubuntu setup using the blueman applet. The keyboard types fine, but I would like to use its fn key for changing screen brightness, page-up (fn+ctrl+down), page-down (fn+ctrl+up), et cetera. Right now the fn key doesn't seem to work. When I use xev, I don't see anything happen when I press fn. Does the keyboard not send this to the computer at all? Do I need to configure blueman's "Input Service" setting to make this an Apple (rather than a generic) keyboard? (It's not obvious how to do this.) Where in this stack of software do I need to make a change to achieve the desired behaviour? Thanks!
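
    One note that may explain the xev silence: on Linux, Apple keyboards (including the Bluetooth one) are handled by the hid_apple driver, which consumes fn itself rather than passing it along as an ordinary keycode. Its fnmode parameter controls the behaviour (0 = fn disabled, 1 = media keys by default, 2 = F1-F12 by default); a quick sketch to experiment with:

        echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode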

    Read the article
