Search Results

Search found 20029 results on 802 pages for 'directory permissions'.


  • http://localhost/~admin/ gets a 403

    - by Pavan Katepalli
    When I go to localhost/~admin/ or 127.0.0.1/~admin/, my browser says: "Forbidden. You don't have permission to access /~admin/ on this server." How do I change this? It's driving me nuts! When I go to localhost or 127.0.0.1/, my browser says "It works!", so Apache itself is running. I'm on Mac OS X 10.8. I created aliases in my .bash_profile so that I can start, stop and restart Apache quickly:

        alias startApache="sudo apachectl start"
        alias stopApache="sudo apachectl stop"
        alias restartApache="sudo apachectl restart"

    In /etc/apache2/httpd.conf I turned on PHP 5:

        LoadModule php5_module libexec/apache2/libphp5.so

    I also made sure to change the permissions on my admin.conf file with this command in Terminal:

        sudo chmod 644 username.conf

    This is my /etc/apache2/users/admin.conf:

        <Directory "/Users/admin/Sites/">
            Options Indexes MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
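    A frequent cause of this 403 on Mountain Lion is that the userdir machinery is disabled in the stock httpd.conf, or that Apache cannot traverse the home directory. A hedged sketch of what to check (the module and Include lines below are the stock 10.8 ones; verify them against your own httpd.conf before relying on them):

        # in /etc/apache2/httpd.conf, these must be uncommented:
        LoadModule userdir_module libexec/apache2/mod_userdir.so
        Include /private/etc/apache2/extra/httpd-userdir.conf

        # /private/etc/apache2/extra/httpd-userdir.conf must in turn include:
        Include /private/etc/apache2/users/*.conf

        # Apache's _www user also needs execute permission down to Sites:
        chmod o+x /Users/admin /Users/admin/Sites

    After changing any of these, restart Apache with sudo apachectl restart.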

    Read the article

  • xcopy file, suppress “Does xxx specify a file name…” message

    - by MarkPearl
    Today we had an interesting problem with file copying. We wanted to use xcopy to copy a file from one location to another and rename the copied file, but do this while impersonating another user. Getting the impersonation to work was fairly simple; the challenge was then getting xcopy to work. The problem was that xcopy kept prompting us with something similar to the following:

        Does file.xxx specify a file name or directory name on the target (F = file, D = directory)?

    at which point we needed to press 'F'. This seems to be a fairly common challenge with xcopy, as illustrated by a Stack Overflow question on the topic. One of the solutions was to do the following:

        echo f | xcopy /f /y srcfile destfile

    This is fine if you are running from the command prompt, but since we were triggering this from C#, how could we daisy-chain a bunch of commands? The solution was fairly simple; we eventually ended up with the following method:

        public void Copy(string initialFile, string targetFile)
        {
            string xcopyExe = @"C:\windows\system32\xcopy.exe";
            string cmdExe = @"C:\windows\system32\cmd.exe";

            ProcessStartInfo p = new ProcessStartInfo();
            p.FileName = cmdExe;
            p.Arguments = string.Format(@"/c echo f | {2} {0} {1} /Y", initialFile, targetFile, xcopyExe);

            Process.Start(p);
        }

    Here we wrapped the commands we wanted to chain as arguments and, instead of calling xcopy directly, called cmd.exe with xcopy passed as an argument.

    Read the article

  • Apache Passenger can't find gem

    - by purpletonic
    I'm running Ubuntu 10.04 and I've transferred over some sites built in Sinatra. I've set up Phusion Passenger, but when I visit the sites I get a Passenger LoadError claiming 'no such file to load -- sinatra', yet when I run gem list or sudo gem list, I clearly see sinatra listed. Why can't Passenger find this gem? My sudo gem env output looks like this:

        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.5
          - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [x86_64-linux]
          - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
          - RUBY EXECUTABLE: /usr/local/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/local/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86_64-linux
          - GEM PATHS:
            - /usr/local/lib/ruby/gems/1.8
            - /root/.gem/ruby/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
          - REMOTE SOURCES:
            - http://gems.rubyforge.org/

    Running 'sudo ruby -v' I see the following:

        ruby 1.8.7 (2009-12-24 patchlevel 248) [x86_64-linux], MBARI 0x6770, Ruby Enterprise Edition 2010.01

    Is that correct, or should the two Ruby versions match up, displaying REE in both? Thanks in advance!
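    Passenger only sees the gems of the Ruby it was built for and configured to run, so the usual culprit is two Ruby installations (here, /usr/local and Ruby Enterprise Edition) with the gem installed into only one of them. As a hedged sketch, either point Passenger explicitly at the Ruby whose gems include sinatra, or install the gem with that exact interpreter (the paths below come from the gem env output above, so adjust them to your system):

        # in the Apache config, next to the other Passenger directives:
        PassengerRuby /usr/local/bin/ruby

        # or install the gem with the interpreter Passenger actually uses, e.g.:
        sudo /usr/local/bin/gem install sinatra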

    Read the article

  • Is it possible to skip .rvmrc confirmation?

    - by Viacheslav Molokov
    We are using RVM to manage Ruby installations and environments. Usually we use this .rvmrc script:

        #!/bin/bash
        if [ ! -e '.version' ]; then
          VERSION=`pwd | sed 's/[a-z/-]//g'`
          echo $VERSION > .version
          rvm gemset create $VERSION
        fi
        VERSION=`cat .version`
        rvm use 1.9.2@$VERSION

    This script forces RVM to create a new gem environment for each of our projects/versions. But each time we deploy a new version, RVM asks us to confirm the new .rvmrc file. When we cd into the directory for the first time, we get something like:

        ===============================================================
        = NOTICE:                                                     =
        ===============================================================
        = RVM has encountered a not yet trusted .rvmrc file in the    =
        = current working directory which may contain nasty code.     =
        =                                                             =
        = Examine the contents of this file to be sure the contents   =
        = are good before trusting it!                                =
        =                                                             =
        = Press 'q' to exit the reader when finished reading the file =
        ===============================================================
        (press enter to continue when ready)

    This is not so bad for development environments, but with automated deploys it requires manually confirming each new version on each server. Is it possible to skip this confirmation?
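    RVM has switches for exactly this situation. A hedged sketch (the flag has been spelled this way for a while, but double-check against your RVM version's documentation):

        # trust every .rvmrc automatically, per user or system-wide:
        echo 'rvm_trust_rvmrcs_flag=1' >> ~/.rvmrc    # or /etc/rvmrc

        # or trust a single project's file explicitly, e.g. from a deploy hook:
        rvm rvmrc trust /path/to/project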

    Read the article

  • Firewall Authentication - logon failed

    - by RoseofPurple
    I am attempting to use a WatchGuard Firebox 550e with Fireware XTM 11 to authenticate incoming traffic for RDP access. I have configured the firewall to use my domain controller for Active Directory authentication against a Windows 2000 server farm and added a couple of user accounts to the users list in the firewall, but when I attempt to log onto the firewall's authentication page, I get "Logon failed". I know that the user names and passwords are correct. I am also certain that I have told it to authenticate against Active Directory instead of the FireboxDB. I have tried the username alone, domain\username, and the email address. I believe that the search base is correct (DC=mydomainname,DC=com), and I did not change any defaults for sAMAccountName (nor do I recall changing those items when configuring the domain structure). Any assistance would be appreciated.
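    One way to take the firewall out of the equation while troubleshooting is to run the same bind and search directly against the domain controller with the OpenLDAP client tools; if this fails too, the problem is the search base or the credentials rather than the Firebox. A hedged sketch with placeholder host and user names:

        ldapsearch -x -H ldap://dc.mydomainname.com \
          -D "testuser@mydomainname.com" -W \
          -b "DC=mydomainname,DC=com" "(sAMAccountName=testuser)" dn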

    Read the article

  • Setting up Ubuntu Server on Amazon EC2 for hosting multiple domains with wildcard subdomains

    - by Ashish Kumar
    I'm trying to set up multiple domains on my Amazon EC2 micro instance running Ubuntu Server 12.04. I installed Apache and set up virtual hosts correctly, but I'm having problems with wildcard subdomains. This is what my httpd.conf looks like:

        NameVirtualHost *:80
        <VirtualHost *:80>
            UseCanonicalName Off
            VirtualDocumentRoot /home/username/domains/%0/html/
        </VirtualHost>

    My DNS records (on Amazon Route 53) are:

        domain.tld      A  1.2.3.4
        *.domain.tld    A  1.2.3.4

    If I create a test.domain.tld directory with the html subdirectory, it works fine. But what I want is to redirect *.domain.tld to domain.tld when there is no directory for the requested subdomain. I would also like www.domain.tld to redirect to domain.tld. The setup should keep working if I decide to host another website, example.com, on the same server. I tried Googling a lot but without any luck. Suggestions?
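    One way to get that fallback behaviour is to keep the VirtualDocumentRoot and add a couple of mod_rewrite rules in the same vhost that test whether a per-host docroot actually exists on disk. A minimal sketch, assuming the docroots live under /home/username/domains/ as above, that /home/username/domains/domain.tld/html exists (otherwise the fallback would loop), and that mod_rewrite is enabled:

        <VirtualHost *:80>
            UseCanonicalName Off
            VirtualDocumentRoot /home/username/domains/%0/html/

            RewriteEngine On
            # www.domain.tld -> domain.tld
            RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
            RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
            # any host without a matching docroot falls back to the bare domain
            RewriteCond /home/username/domains/%{HTTP_HOST}/html !-d
            RewriteRule ^ http://domain.tld%{REQUEST_URI} [R=302,L]
        </VirtualHost>

    Because the rules only fire when the per-host directory is missing, hosting example.com later just means creating /home/username/domains/example.com/html/.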

    Read the article

  • Create mix CDs from MP3 files

    - by Dave Jarvis
    How would you write a script (preferably for the Windows command line) that:

        - examines thousands of MP3 files stored on a single drive (e.g., G:\)
        - randomizes the collection
        - populates a series of directories with up to 650 MB worth of songs each (without exceeding 650 MB)
        - uses every song exactly once
        - (optional) brings each directory's size as close as possible to 650 MB

    The DIR, COPY, and XCOPY commands have no explicit file-size switches. A few Google searches came up with "File size condition in DOS", "Cygwin and UWIN", and "DOS file sizes". It would be ideal if UNIX-like environments could be avoided. My question, then: how do you compare file (or directory) sizes using the Windows command line?
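    In cmd itself, a for variable's %%~z modifier gives a file's size in bytes, and set /a plus if ... GTR lets you compare it against a limit (set /a arithmetic is 32-bit signed, which is plenty for a 650 MB cap but would overflow on files over 2 GB). A rough sketch of the filling loop, without the randomization step and with hypothetical source and target paths:

        @echo off
        setlocal EnableDelayedExpansion
        set LIMIT=650000000
        set DISC=1
        set TOTAL=0
        md "D:\discs\!DISC!"
        for /f "delims=" %%F in ('dir /b /s "G:\*.mp3"') do (
            rem would this file push the current disc over the limit?
            set /a NEXT=TOTAL+%%~zF
            if !NEXT! GTR %LIMIT% (
                set /a DISC+=1
                set /a TOTAL=0
                md "D:\discs\!DISC!"
            )
            copy "%%F" "D:\discs\!DISC!\" >nul
            set /a TOTAL+=%%~zF
        )

    This is first-fit rather than best-fit, so it will not hit the optional "as close as possible to 650 MB" goal; that part is a bin-packing problem and is easier to tackle in a real scripting language.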

    Read the article

  • Apache: Virtual Host and .htacess for URL Rewriting not working

    - by parth
    I have configured a virtual host on my local machine and everything is working fine. Now I want to use SEO-friendly URLs, so I am using an .htaccess file. My virtual host configuration is:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    and my .htaccess file has:

        AllowOverride All
        RewriteEngine On
        RewriteBase /ypp/
        RewriteRule ^/browse$ /browse.php
        RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
        RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2

    The above .htaccess settings are not working. I then modified my virtual host and it works. The new virtual host setting is:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteRule ^/browse$ /browse.php
            RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
            RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
            <Directory "C:/xampp/htdocs/ypp">
                AllowOverride All
            </Directory>
        </VirtualHost>

    Please let me know where I am going wrong in the .htaccess file for URL rewriting. I do not want to keep the rules in the virtual host, since every change there requires restarting Apache.
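    Two details in the .htaccess are the likely problem: AllowOverride is only valid in the server configuration or a <Directory> block, never inside .htaccess itself, and in per-directory (.htaccess) context Apache strips the leading slash before matching, so patterns starting with ^/ never match. A sketch of the two pieces, using the paths from the question:

        # in the virtual host (or httpd.conf); this cannot live in .htaccess:
        <Directory "C:/xampp/htdocs/ypp">
            AllowOverride All
        </Directory>

        # C:/xampp/htdocs/ypp/.htaccess - note: no leading slashes in the patterns
        RewriteEngine On
        RewriteBase /
        RewriteRule ^browse$ browse.php [L]
        RewriteRule ^browse/([a-z]+)$ browse.php?cat=$1 [L]
        RewriteRule ^browse/([a-z]+)/([a-z]+)$ browse.php?cat=$1&subcat=$2 [L]

    Only the <Directory> change requires an Apache restart; after that, edits to the .htaccess file take effect immediately.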

    Read the article

  • How to start vim without executing /etc/vimrc?

    - by florin
    On my Linux server at work, the admins did not install cscope, so I built it from source in my home directory and added it to my $PATH. The trouble is, /etc/vimrc has a reference to /usr/bin/cscope, which does not exist, and every time I start vim it complains about that and I have to press Enter for the message to go away. Interestingly, if I remove cscope from my $PATH I don't get that behavior; so vim may be testing that cscope exists somewhere and only then executing the cscope configuration, but then getting the path wrong. So my question is: can I set something up in my .vimrc so that it does not source the global /etc/vimrc? I don't want to move cscope out of my PATH, as I don't want to type the full path every time I run it from the command line.
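    You cannot undo /etc/vimrc from your own .vimrc (the system file is sourced first), but vim skips the system vimrc entirely when you point it at your own file with -u. A minimal workaround:

        # start vim with only your personal vimrc; /etc/vimrc is not sourced
        vim -u ~/.vimrc

        # make it the default via a shell alias in ~/.bashrc:
        alias vim='vim -u ~/.vimrc'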

    Read the article

  • linux firefox flash debug: i don't get alerts when errors occur

    - by ufk
    Hello. I use Firefox 3.6.13 on Gentoo Linux amd64 with the Flash 10.2 debug version; I also have Firebug, Flashbug and FlashFirebug installed. My problem is that whenever I run a Flash application that encounters an error, the browser does not open an alert window. This makes it harder to debug applications, because I need to keep the Firebug window open all the time; the only way I can see the error is to open the Flash tab in Firebug. Is there a way to change this behavior? I checked .xsession-errors and tried running Firefox from a console. These are the only errors I see in the log, and they do not appear when the Flash app encounters an error, which means they are unrelated:

        Gtk-Message: Failed to load module "canberra-gtk-module": libcanberra-gtk-module.so: cannot open shared object file: No such file or directory
        Gtk-Message: Failed to load module "gnomebreakpad": libgnomebreakpad.so: cannot open shared object file: No such file or directory

    Thanks.
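    The debug player reads an mm.cfg file in your home directory, and even when no dialog pops up you can get every error written to a log file and watch it from a terminal. A hedged sketch (the option names below are the commonly documented ones for the debug player; double-check them for your 10.2 build):

        # ~/mm.cfg
        ErrorReportingEnable=1
        TraceOutputFileEnable=1

        # errors and trace() output then show up here:
        tail -f ~/.macromedia/Flash_Player/Logs/flashlog.txt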

    Read the article

  • Caching of path environment variable on windows?

    - by jwir3
    I'm assisting one of our testers in troubleshooting a configuration problem on a Windows XP SP3 system. Our application uses an environment variable, APP_HOME, to refer to the directory where the application is installed. When the application is installed, we set the following environment variables:

        APP_HOME = C:\application\
        PATH = %PATH%;%APP_HOME%bin

    Now, the problem is that she works with multiple versions of the same application. So, to switch between versions 7.0 and 8.1, for example, she might use:

        APP_HOME = C:\application_7.0\   (for 7.0)

    and then change it to:

        APP_HOME = C:\application_8.1\   (for 8.1)

    The problem is that once this change is made, the PATH environment variable apparently still reflects the old expansion of APP_HOME: after she has changed APP_HOME, PATH still refers to the 7.0 bin directory. Any thoughts on why this might be happening? It looks as if PATH is caching the expansion of APP_HOME. Is there any way to turn this behavior off?
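    It is worth comparing what is actually stored in the registry with what a fresh process sees: if the stored Path value contains an already-expanded C:\application_7.0\bin rather than the literal %APP_HOME%bin text, the "caching" happened at the moment the variable was saved (the reference is only preserved when Path is stored as an expandable REG_EXPAND_SZ string). Independently, processes that were already running when APP_HOME changed, including the Explorer instance that launches new programs, keep the environment they started with until she logs off and on again. A quick check from cmd:

        :: the machine-wide and per-user definitions; check the value type and
        :: whether %APP_HOME% appears literally or already expanded
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path
        reg query "HKCU\Environment" /v APP_HOME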

    Read the article

  • Running django custom management commands with supervisord

    - by mfsaint
    I'd like to use supervisord to run some commands for my Django project, but I keep getting the following error in supervisor.log:

        2012-05-18 17:52:15,784 INFO spawnerr: can't find command 'source'

    If I remove the "source" command, the log shows the same error for 'python': can't find command 'python'. Here is the relevant excerpt of supervisord.conf:

        [program:django]
        directory=/home/mf/projects/djangopj/
        command=beanstalkd -l 127.0.0.1 -p 11300
        command=source /home/mf/virtualenvs/env/bin/activate
        command=python manage.py command1
        command=python manage.py command2
        user=mf
        autostart=true
        autorestart=true

    I tried removing the directory and adding absolute paths to the commands, but I kept getting the same error. I run supervisord with the following command:

        supervisord -c supervisord.conf -l supervisor.log
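    supervisord does not run commands through a shell, so shell builtins such as source are simply not available, and each [program:x] section takes exactly one command (repeated command= lines do not queue up). The usual pattern is one program section per process, invoking the virtualenv's python directly, which makes activate unnecessary. A sketch reusing the paths from the question (the beanstalkd path is a guess; confirm it with which beanstalkd):

        [program:beanstalkd]
        command=/usr/bin/beanstalkd -l 127.0.0.1 -p 11300
        autostart=true
        autorestart=true

        [program:command1]
        directory=/home/mf/projects/djangopj/
        command=/home/mf/virtualenvs/env/bin/python manage.py command1
        user=mf
        autostart=true
        autorestart=true

        [program:command2]
        directory=/home/mf/projects/djangopj/
        command=/home/mf/virtualenvs/env/bin/python manage.py command2
        user=mf
        autostart=true
        autorestart=true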

    Read the article

  • Apache VirtualHost running very slow on OS X 10.7 (Lion)

    - by jwerre
    I've set up a few virtual hosts on Lion and they're running very slowly.

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>
        <VirtualHost *:80>
            ServerName dev.local
            DocumentRoot "/Users/me/mysite"
            <Directory /Users/me/mysite>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Then in /etc/hosts I added:

        127.0.0.1 dev.local

    Everything works fine, but it's sooooo slow: 5 or so seconds to reload a simple "Hello World" HTML page. Here is the strange part: if I make a symbolic link to the site in my ~/Sites folder (ln -s ~/mysite ~/Sites/mysite) and navigate to http://localhost/~me/mysite, it's nice and fast, the way it should be.
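    The usual culprit for multi-second page loads with names ending in .local on OS X is name resolution: .local is reserved for multicast DNS, so each request can stall on an mDNS/IPv6 lookup before falling back to /etc/hosts. Two common workarounds, offered as a hedged sketch:

        # /etc/hosts - give the name an IPv6 mapping as well as the IPv4 one
        127.0.0.1   dev.local
        ::1         dev.local

    or simply pick a ServerName that does not end in .local (for example dev.mysite.test) and update /etc/hosts accordingly.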

    Read the article

  • PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/apc.so'

    - by user3207586
    I have updated PHP from 5.3.3 to 5.4.31 on Debian 6 Squeeze. Now I get these warnings:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/apc.so' - /usr/lib/php5/20100525/apc.so: cannot open shared object file: No such file or directory in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0

    During the installation, the system asked me whether I wanted to keep the existing php.ini or take the new one, and I chose to keep the existing one. Now I get these two warnings every time I restart Apache. What should I do to fix them?
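    The old configuration still tries to load apc.so and suhosin.so, but no builds of those extensions exist under the new PHP's extension directory (20100525). The fix is either to install versions built for PHP 5.4 or to stop loading them. A hedged sketch for Debian:

        # find where the extensions are enabled
        grep -Rn 'apc\.so\|suhosin\.so' /etc/php5/

        # comment out the matching lines, e.g. in /etc/php5/conf.d/apc.ini:
        ;extension=apc.so

        # then restart Apache
        /etc/init.d/apache2 restart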

    Read the article

  • Writing a "Hello World" Device Driver for kernel 2.6 using Eclipse

    - by Isaac
    Goal: I am trying to write a simple device driver on Ubuntu, and I want to do this using Eclipse (or a better IDE that is suitable for driver programming). Here is the code:

        #include <linux/module.h>

        static int __init hello_world( void )
        {
            printk( "hello world!\n" );
            return 0;
        }

        static void __exit goodbye_world( void )
        {
            printk( "goodbye world!\n" );
        }

        module_init( hello_world );
        module_exit( goodbye_world );

    My effort: after some research, I decided to use Eclipse CDT for developing the driver (while I am still not sure if it supports multi-threaded debugging tools). So I:

        1. Installed Ubuntu 11.04 desktop x86 on a VMware virtual machine,
        2. Installed eclipse-cdt and linux-headers-2.6.38-8 using Synaptic Package Manager,
        3. Created a C project named TestDriver1 and copy-pasted the above code into it,
        4. Changed the default build command, make, to the following customized build command:
           make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1

    The problem: I get an error when I try to build this project using Eclipse. Here is the build log:

        **** Build of configuration Debug for project TestDriver1 ****

        make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1 all
        make: Entering directory `/usr/src/linux-headers-2.6.38-8-generic'
        make: *** No rule to make target `vmlinux', needed by `all'.  Stop.
        make: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic'

    Interestingly, I get no error when I build this project from a shell instead of Eclipse. To use the shell, I just create a Makefile containing obj-m += TestDriver1.o and run the above make command. So something must be wrong with the Eclipse makefile. Maybe it is looking for the vmlinux architecture (?) or something while the current architecture is x86. Maybe it's because of VMware? As I understand it, Eclipse creates the makefiles automatically and modifying them manually would cause errors in the future or make managing the makefiles difficult. So, how can I compile this project in Eclipse?
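    The kernel build system expects the external module directory (the M= path) to contain a kbuild Makefile with an obj-m line; when Eclipse drives the build it also appends the "all" target, so that target has to exist locally and forward to the kernel's "modules" target. One arrangement that usually works is to create the project as a Makefile project (so Eclipse does not generate its own makefile), keep the build command as plain make, and put a Makefile like this in the project root (recipe lines must start with a tab):

        # Makefile for an out-of-tree module, kbuild style
        obj-m += TestDriver1.o

        KDIR := /lib/modules/$(shell uname -r)/build

        all:
        	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

        clean:
        	$(MAKE) -C $(KDIR) M=$(CURDIR) clean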

    Read the article

  • Using rsync when files on one end are all lowercase

    - by DormoTheNord
    I want to rsync a lot of files from a Windows box to a Linux server. The problem is that the files on Windows are all mixed case, and the files on the Linux server need to be all lowercase. One solution is to have a script that rsyncs to a staging directory on the server, copies the files into the main directory, and then converts them all to lowercase. I'd rather find a more elegant solution, though. I'd prefer a command-line application, but I'd be willing to go with a GUI application if that's the best option.
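    For what it's worth, rsync itself has no case-folding option, so some post-processing on the Linux side is hard to avoid. A sketch of the lowercasing step (GNU find and bash, with a hypothetical destination path; -depth renames children before their parent directories, and mv -n refuses to clobber on name collisions):

        find /srv/dest -depth -name '*[A-Z]*' -print0 |
        while IFS= read -r -d '' path; do
            dir=$(dirname "$path")
            base=$(basename "$path")
            mv -n "$path" "$dir/$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')"
        done

    The catch with any rename-after approach is that the next rsync from the mixed-case source sees the lowercased names as different files and re-transfers everything, which is presumably why you are hoping for something more elegant.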

    Read the article

  • allowing sudo to delete certain files

    - by chandank
    I would like to allow certain sudo users to delete certain files in the /tmp directory. I have added an Allow_Cmnd entry for /usr/sbin/userdel for the sudo users, but that does not delete the /tmp files associated with the user. So how should I tweak sudoers to allow them to delete certain files in the /tmp directory only? I googled a bit and learned that a regex might be applicable here; I tried a couple of tweaks but none of them work for me. I would like the users to be able to execute a command such as:

        find /tmp -uid 10002 | grep joeuser | xargs rm -rf
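    Wildcards and pipelines are hard to express safely in sudoers (an argument pattern loose enough to match that find/rm pipeline is also loose enough to abuse), so the usual approach is to wrap the operation in a root-owned script and grant sudo on the script alone. A sketch with hypothetical script and group names:

        #!/bin/bash
        # /usr/local/sbin/clean-user-tmp  (root-owned, mode 0755)
        # remove top-level /tmp entries owned by the numeric uid given as $1, nothing else
        set -eu
        uid="$1"
        find /tmp -mindepth 1 -maxdepth 1 -uid "$uid" -exec rm -rf {} +

    and in sudoers (edited with visudo, or as a file under /etc/sudoers.d/):

        %helpdesk ALL=(root) NOPASSWD: /usr/local/sbin/clean-user-tmp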

    Read the article

  • How to Audit and Monitor BI Publisher Reports Access?

    - by kanichiro.nishida
    Do you know who is accessing which report, and at what time, in your reporting environment? Once you have delivered your BI Publisher reports to production and your users start using them as part of their daily business operations, you might start wondering about such questions. With compliance becoming an integral part of any business requirement, auditing your reporting environment is becoming one of the most critical and hottest agenda items in today's enterprise reporting deployments. I also believe that auditing the reporting environment is not just for compliance; it is also a way to understand how your users are using the reports, so you can improve the user reporting experience.

    BI Publisher introduced an enterprise-level auditing feature with its 11g release, integrated with the Oracle Fusion Middleware Audit Framework, which comes out of the box with the installation. Yes, this is another great example of the benefit of the tight integration with Fusion Middleware introduced in the BI Publisher 11g release.

    What Information Can I Know about our Reporting Environment?

    With this new auditing feature you can now gain the following insights:

        - when a particular user logs in or out
        - which report is accessed by whom, when, and how
        - how long it takes to process a particular report

    Yes, it's all there. This is great news for 10g users, right? I used to be one of them, working with many different IT organizations and craving for this, but it's here now with 11g!

    How Can I Access the Auditing Information?

    With the Fusion Middleware Audit Framework, BI Publisher feeds such information either to a log file or to a database. If you decide to get the data into the database then, of course, you can use BI Publisher to report on, publish, or visualize the data to gain more insights. One thing, though: feeding the data to the database requires a few extra steps, which I'll cover later. Regardless of whether the auditing data is stored in the log file or the database, you first need to enable the auditing feature, which is not enabled by default. So, let's take a look at how to enable it.

    How to Enable the Auditing Feature

    Here is a quick list of the steps:

        1. Enable the auditing-related properties in the BI Publisher configuration file
        2. Copy the component_events.xml file to the Fusion Middleware Audit Framework's location
        3. Enable the audit policy with Fusion Middleware Control (Enterprise Manager)
        4. Restart the WebLogic Server

    Enable the auditing-related properties in the BI Publisher configuration file

    Open the xmlp-server-config.xml file, which is located under the $BI_HOME/user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration directory, and set the following three properties to 'true':

        - AUDIT_ENABLED
        - MONITORING_ENABLED
        - AUDIT_JPS_INTEGRATION

    AUDIT_JPS_INTEGRATION is not in the file by default, so you need to add it. Here is an example of how the xmlp-server-config.xml file looks after the modification:
        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <xmlpConfig xmlns="http://xmlns.oracle.com/oxp/xmlp">
          <property name="SAW_SERVER" value="adc6160510"/>
          <property name="SAW_SESSION_TIMEOUT" value="90"/>
          <property name="DEBUG_LEVEL" value="exception"/>
          <property name="SAW_PORT" value="7001"/>
          <property name="SAW_PASSWORD" value=""/>
          <property name="SAW_PROTOCOL" value="http"/>
          <property name="SAW_VERSION" value="v6"/>
          <property name="SAW_USERNAME" value=""/>
          <property name="SAW_URL_SUFFIX" value="analytics/saw.dll"/>
          <property name="MONITORING_ENABLED" value="true"/>
          <property name="MONITORING_DEFAULT_HISTORY_SIZE" value="30"/>
          <property name="AUDIT_ENABLED" value="true"/>
          <property name="JSESSION_RESET_DISABLED" value="true"/>
          <property name="SECURITY_MODEL" value="ORACLE_AS_JPS"/>
          <property name="AUDIT_JPS_INTEGRATION" value="true"/>
        </xmlpConfig>

    Copy the component_events.xml file to the Audit Framework's location

    There is an audit-related configuration file provided by BI Publisher that needs to be copied to the Audit Framework location:

        1. Go to the $BI_HOME/oracle_common/modules/oracle.iau_11.1.1/components directory.
        2. Create a directory called 'xmlpserver'.
        3. Copy the component_events.xml file from /user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Audit to the newly created 'xmlpserver' directory.

    Enable the audit policy with Fusion Middleware Control (EM)

    Now you can set the level of auditing for each of BI Publisher's audit types using Fusion Middleware Control (a.k.a. Enterprise Manager):

        1. Log in to the Fusion Middleware Control UI at http://hostname:port/em (e.g. reporting.oracle.com:7001/em).
        2. Access the Audit Policy configuration UI from the menu: under WebLogic Domain, right-click bifoundation_domain, select Security and then click Audit Policy.
        3. Set the audit level for BI Publisher. While you can select 'Custom' to set a customized level of auditing for each component, I'm selecting 'Medium' for this exercise.

    Restart the WebLogic Server

    After all the above settings, you need to restart the WebLogic Server instance for the changes to take effect. If you're on Windows you can simply do this by selecting 'Stop BI Servers' and 'Start BI Servers' from the Start menu. If you're on Linux you can run 'stopWebLogic.sh' and 'startWebLogic.sh', which can be found under $BI_HOME/user_projects/domains/bifoundation_domain/bin.

    Start Auditing!
    Now, assuming you have completed the above steps successfully, from this point on any reporting activity should be audited and stored in the audit log file, which can be found at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver/audit.log. And here is a sample of the log file:

        2011-02-18 02:25:49.928 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "200" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 86608512 486989824 24517 169 - - -
        2011-02-18 02:25:49.929 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "200" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
        2011-02-18 03:25:49.554 "" "ReportDataProcess" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "260" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" - - - - - - - - - - - - - - - - - 34980200 554033152 - 134 - - -
        2011-02-18 03:25:50.282 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "263" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 16158944 554033152 24517 503 - - -
        2011-02-18 03:25:50.282 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "263" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
        2011-02-18 03:30:00.448 "barack.obama" "UserLogin" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000406,0" - - - - "bipublisher(11.1.1)" "UserSession" "26" "" - - - - - - - - - - - - - - - - - - - - - - - - -

    From the above log you can tell that the user 'steve.jobs' was running reports such as 'Balance Letter' in the afternoon of 2/18, and that another user, 'barack.obama', logged into the system at 3:30 on the same day. Yes, every login and logout will be recorded, and every report access will be recorded in this log file. Now, looking at this text file to understand what's going on is pretty overwhelming. And accessing this log file, which sits on the file system of the server where BI Publisher/WebLogic Server are running, is another challenge in typical deployment scenarios. That's where the database storage option for the auditing data comes into the picture. I'll talk about that tomorrow, so stay tuned!
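    Until the database option is in place, the flat file is still easy to interrogate from a shell on the server. For example, to pull just the report-access records for one user out of the log shown above (paths exactly as given in the post):

        cd $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver
        grep '"ReportRequest"' audit.log | grep '"steve.jobs"'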

    Read the article

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database that has 1 million customer invoice records in it. Using the database, our client wants its customers to be able to log into a front-end web site and then view, modify, and download their company's invoices. Given the size of the database and the large number of customers who may be logged into the site at any time, we are concerned about database engine performance and web-page invoice-rendering performance. The 1 million invoice records cover just 90 days of sales, so we will remove invoices more than 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats; for example, it is easy for us to convert between SQL and XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure. How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip code into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer's invoices in that directory in XML format? And secondly, what would you suggest as the best method to render customer invoices on a web page and allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.
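    For a 90-day, million-row workload like this, the conventional starting point is a single invoice-header table and a single line-item table, indexed (or clustered) by customer and date, rather than splitting data by zip code or by per-customer XML files on IIS; SQL Server handles a few million rows comfortably when the common query ("this customer's recent invoices") is covered by an index. A hedged sketch with hypothetical table and column names:

        -- assumes Invoice(InvoiceId, CustomerId, InvoiceDate, ...) and InvoiceLine(InvoiceId, ...)
        CREATE NONCLUSTERED INDEX IX_Invoice_Customer_Date
            ON dbo.Invoice (CustomerId, InvoiceDate DESC);

        CREATE NONCLUSTERED INDEX IX_InvoiceLine_Invoice
            ON dbo.InvoiceLine (InvoiceId);

    The 90-day purge can then be a scheduled DELETE (or a partition switch) keyed on InvoiceDate without touching the application.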

    Read the article

  • htpasswd and htaccess in Plesk 9.3

    - by J White
    Greetings. Here is my situation: I currently have one website on my dedicated server. For now, I have protected the /exclusive directory using the Plesk control panel. A billing company is going to install their password-management script on the server, but they need the absolute location of the .htpasswd file, and I can't find it or the .htaccess file. Would it be easier to unprotect the /exclusive directory and create the .htaccess file myself in Notepad? If I do that, where should I place the .htpasswd file? -jw
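    If you do manage it by hand, basic auth only needs two small files, and the password file should sit outside the web root. A hedged sketch with hypothetical Plesk-style paths (Plesk usually keeps its own protected-directory password files somewhere under the domain's directory tree, so adapt the paths to what you actually find on your server):

        # create the password file (htpasswd ships with Apache)
        htpasswd -c /var/www/vhosts/example.com/.htpasswd billinguser

        # /var/www/vhosts/example.com/httpdocs/exclusive/.htaccess
        AuthType Basic
        AuthName "Exclusive"
        AuthUserFile /var/www/vhosts/example.com/.htpasswd
        Require valid-user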

    Read the article

  • Debuild fails to make package for bluelog-1.04

    - by Dean Howell
    When trying to build a package for bluelog, debuild gives several errors. In the past I've used checkinstall to quickly build crude packages; I am now trying to do it the right way and upload to a PPA. Bluelog can be found here: http://www.digifail.com/software/bluelog.shtml Here is the output from debuild:

        dpkg-buildpackage -rfakeroot -D -us -uc
        dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): -D_FORTIFY_SOURCE=2
        dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security
        dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions -Wl,-z,relro
        dpkg-buildpackage: source package bluelog
        dpkg-buildpackage: source version 1.0.4-0ubuntu1
        dpkg-buildpackage: source changed by Dean Howell <dean@unknown>
        dpkg-source --before-build bluelog
        dpkg-buildpackage: host architecture amd64
        fakeroot debian/rules clean
        dh clean
        dh_testdir
        dh_auto_clean
        make[1]: Entering directory `/home/dean/Launchpad Builds/bluelog/bluelog'
        rm -rf bluelog www/cgi-bin/* *.o *.txt *.log *.gz *.cgi
        make[1]: Leaving directory `/home/dean/Launchpad Builds/bluelog/bluelog'
        dh_clean
        dpkg-source -b bluelog
        dpkg-source: warning: Version number suggests Ubuntu changes, but Maintainer: does not have Ubuntu address
        dpkg-source: warning: Version number suggests Ubuntu changes, but there is no XSBC-Original-Maintainer field
        dpkg-source: info: using source format `3.0 (quilt)'
        dpkg-source: info: building bluelog using existing ./bluelog_1.0.4.orig.tar.gz
        dpkg-source: error: cannot represent change to bluelog/Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog: binary file contents changed
        dpkg-source: error: add Builds/bluelog/bluelog/debian/bluelog/usr/bin/bluelog in debian/source/include-binaries if you want to store the modified binary in the debian tarball
        dpkg-source: error: unrepresentable changes to source
        dpkg-buildpackage: error: dpkg-source -b bluelog gave error exit status 2
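    The tell-tale line is "cannot represent change to bluelog/Builds/.../usr/bin/bluelog: binary file contents changed": dpkg-source is finding compiled binaries inside the source tree that are not in the pristine bluelog_1.0.4.orig.tar.gz, which usually means leftovers from an earlier build (here, what looks like a stray copy of the build area nested inside the source). A hedged sketch of the usual recovery, assuming those files really are build leftovers and not part of upstream:

        # from the top of the unpacked source
        fakeroot debian/rules clean     # let the package's own clean target run
        rm -rf Builds/                  # hypothetical: the stray nested build tree named in the error
        debuild -S -us -uc              # rebuild the source package for the PPA

    Adding the files to debian/source/include-binaries, as the error message suggests, is the other route, but that ships the binaries inside the Debian tarball and is rarely what you want for a PPA upload.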

    Read the article

  • htaccess Redirect / RedirectMatch with URLs that contain Special / Encoded Characters

    - by dSquared
    I'm currently in the process of applying a variety of 301 redirects in an .htaccess file for a website that recently changed its structure. Everything works as expected except for URLs that contain special characters; for these I get 404 errors. For example, the following directives, which contain a registered trademark symbol (®), bring up 404 pages:

        RedirectMatch 301 ^/directory/link-with®-special-character(/)?$ somelink.com
        RedirectMatch 301 ^/directory/link-with%c2%ae-special-character(/)?$ somelink.com

    I've also tried using Redirect and RewriteRule and surrounding the URLs with double quotes, and nothing seems to work. Does anyone know what might be happening, or the proper way to handle these types of directives? Any help is greatly appreciated.
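    Redirect and RedirectMatch compare their pattern against the already-decoded URL path, so the %c2%ae form will never match, and the literal ® only matches if the bytes in the .htaccess file are the same UTF-8 bytes the browser sends. Two things to try, offered with low confidence since behaviour varies by Apache/PCRE build: re-save the .htaccess as UTF-8 (without a BOM) so the literal character matches, or spell the bytes out in the regex:

        # \xc2\xae are the UTF-8 bytes of the registered-trademark sign
        RedirectMatch 301 ^/directory/link-with\xc2\xae-special-character(/)?$ http://somelink.com/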

    Read the article

  • Plugin 'InnoDB' registration as a STORAGE ENGINE failed. On win 7

    - by NimChimpsky
    I have had to reinstall MySQL, but the service fails to start, with the above error listed as the cause in Event Viewer. One suggested solution is to delete a couple of files prefixed with 'ib_logfile', which belong to the old installation. However, I do not have these files, and my service still fails to start. When I say I don't have these files: I searched with Windows Search with zero results, and they are definitely not present in my MySQL install directory. I also don't have the "Documents and Settings/Application Data" folder referenced in the linked article; in fact I have only one MySQL install directory, and I know where that is. The instance is configured OK (I ran the configuration as administrator and it is listed in Services), but the service itself fails to start. So what do I need to delete or change? Any tips, other than switching to PostgreSQL?
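    On Windows 7 the old "Documents and Settings\Application Data" location no longer exists; per-machine application data lives under the hidden C:\ProgramData folder, and that is where MySQL normally keeps its data directory (and therefore any leftover ib_logfile*/ibdata files). A hedged sketch for locating them, with the version number as an assumption:

        :: show the data directory of a default 5.x install (the folder is hidden)
        dir "C:\ProgramData\MySQL\MySQL Server 5.5\data"

        :: confirm which datadir the service is actually configured to use
        findstr /i "datadir" "C:\ProgramData\MySQL\MySQL Server 5.5\my.ini"

    If stale ib_logfile0/ib_logfile1 files from the previous installation are sitting in that datadir, that is the pair the usual advice says to move aside before starting the service again.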

    Read the article

  • Oracle on Oracle: Is that all?

    - by Darin Pendergraft
    On October 17th, I posted a short blog and a podcast interview with Chirag Andani, talking about how Oracle IT uses its own IDM products. Blog link here.

    In response, I received a comment from reader Jaime Cardoso ([email protected]) who posted:

        “- You could have talked about how by deploying Oracle's Open standards base technology you were able to integrate any new system in your infrastructure in days.
        - You could have talked about how by deploying federation you were enabling the business side to keep all their options open in terms of companies to buy and sell while maintaining perfect employee and customer's single view.
        - You could have talked about how you are now able to cut response times to your audit and security teams into 1/10th of your former times
        Instead you spent 6 minutes talking about single sign on and self provisioning? If I didn't knew your IDM offer so well I would now be wondering what its differences from Microsoft's offer was. Sorry for not giving a positive comment here but, please your IDM suite is very good and, you simply aren't promoting it well enough”

    So I decided to send Jaime a note asking him about his experience, and to get his perspective on what makes the Oracle products great. What I found out is that Jaime is a very experienced IDM Architect with several major projects under his belt.

    Darin Pendergraft: Can you tell me a bit about your experience? How long have you worked in IT, and what is your IDM experience?

    Jaime Cardoso: I started working in "serious" IT in 1998 when I became Netscape's technical specialist in Portugal. Netscape Portugal didn't exist so, I was working for their VAR here. Most of my work at the time was with Netscape's mail server and LDAP server. Since that time I've been bouncing between the system's side like Sun resellers, Solaris stuff and even worked with Sun's Engineering in the making of an Hierarchical Storage Product (Sun CIS if you know it) and the application's side, mostly in LDAP and IDM. Over the years I've been doing support, service delivery and pre-sales / architecture design of IDM solutions in most big customers in Portugal, to name a few projects:

        - The first European deployment of Sun Access Manager (SAPO – Portugal Telecom)
        - The identity repository of 5/5 of the Biggest Portuguese banks
        - The Portuguese government federation of services project

    DP: OK, in your blog response, you mentioned 3 topics: 1. Using Oracle's standards based architecture; (you) were able to integrate any new system in days: can you give an example? What systems, how long did it take, number of apps/users/accounts/roles etc.

    JC: It's relatively easy to design a user management strategy for a static environment, or if you simply assume that you're an <insert vendor here> shop and all your systems will bow to that vendor's will. We've all seen that path, the use of proprietary technologies in interoperability solutions but, then reality kicks in. As an ISP I recall that I made the technical decision to use Active Directory as a central authentication system for the entire IT infrastructure. Clients, systems, apps, everything was there. As a good part of the systems and apps were running on UNIX, then a connector became needed in order to have UNIX boxes to authenticate against AD. And, that strategy worked but, each new machine required the component to be installed, monitoring had to be made for that component and each new app had to be independently certified.
    A self care user portal was an ongoing project, AD access assumes the client is inside the domain, something the ISP's customers (and UNIX boxes) weren't nor had any intention of ever being.

    When the Windows 2008 rollout was done, Microsoft changed the Active Directory interface. The Windows administrators didn't have enough know-how about directories and the way systems outside the MS world behaved so, on the go live, things weren't properly tested and a general outage followed. Several hours and 1 roll back later, everything was back working. But, the ISP still had to change all of its applications to work with the new access methods and reset the effort spent on the self service user portal. To keep with the same strategy, they would also have to trust Microsoft not to change interfaces again.

    Simply by putting up an Oracle LDAP server in the middle and replicating the user info from the AD into LDAP, most of the problems went away. Even systems for which no AD connector existed had PAM in them so, integration was made at the OS level, fully supported by the OS supplier. Sun Identity Manager already had a self care portal, combined with a user workflow so, all the clearances had to be given before the account was created or updated. Adding a new system as a client for these authentication services was simply a new checkbox in the OS installer and, even True64 systems were, for the first time integrated also with a 5 minute work of a junior system admin. True, all the windows clients and MS apps still went to the AD for their authentication needs so, from the start everybody knew that they weren't 100% free of migration pains but, now they had a single point of problems to look at.

    If you're looking for numbers:

        - 500K directory entries (users)
        - 2-300 systems

    After the initial setup, I personally integrated about 20 systems / apps against LDAP in 1 day while being watched by the different IT teams. The internal IT staff did the rest.

    DP: 2. Using Federation allows the business to keep options open for buying and selling companies, and yet maintain a single view for both employee and customer. What do you mean by this? Can you give an example?

    JC: The market is dynamic. The company that's being bought today tomorrow will be sold again. Companies that spread on different markets may see the regulator forcing a sale of part of a company due to monopoly reasons and companies that are in multiple countries have to comply with different legislations. Our job, as IT architects, while addressing the customers and employees authentication services, is quite hard and, quite contrary. On one hand, we need to give access to all of our employees to the relevant systems, apps and resources and, we already have marketing talking with us trying to find out who's a customer of the bough company but not from ours to address. On the other hand, we have to do that and keep in mind we may have to break up all that effort and that different countries legislation may became a problem with a full integration plan. That's a job for user Federation. You don't want to be the one who's telling your President that he will sell that business unit without it's customer's database (making the deal worth a lot less) or that the buyer will take with him a copy of your entire customer's database. Federation enables you to start controlling permissions to users outside of your traditional authentication realm. So what if the people of that company you just bought are keeping their old logins?
    Do you want, because of that, to have a dedicated system for their expenses reports? And do you want to keep their sales (and pre-sales) people out of the loop in terms of your group's path? Control the information flow, establish a Federation trust circle and give access to your apps to users that haven't (yet?) been brought into your internal login systems. You can still see your users in a unified view, you obviously control if a user has access to any particular application, either that user is in your local database or stored in a directory on the other side of the world.

    DP: 3. Cut response times of audit and security teams to 1/10. Is this a real number? Can you give an example?

    JC: No, I don't have any backing for this number. One of the companies I did system Administration for has a SOX compliance policy in place (I remind you that I live in Portugal so, this definition of SOX may be somewhat different from what you're used to) and, every time the audit team says they'll do another audit, we have to negotiate with them the size of the sample and we spend about 15 man/days gathering all the required info they ask. I did some work with Sun's Identity auditor and, from what I've been seeing, Oracle's product is even better and, I've seen that most of the information they ask would have been provided in a few hours with the help of this tool. I do stand by what I said here but, to be honest, someone from Identity Auditor team would do a much better job than me explaining this time savings.

    Jaime is right: the Oracle IDM products have a lot of business value, and Oracle IT is using them for a lot more than I was able to cover in the short podcast that I posted. I want to thank Jaime for his comments and perspective. We want these blog posts to be informative and honest, so if you have feedback for the Oracle IDM team on any topic discussed here, please post your comments below.

    Read the article

  • activesync not working with forms based authentication

    - by Chad
    I have a single Exchange 2003 SP2 back-end server with an SSL certificate. I was having trouble getting OMA to work, so I followed a Microsoft article describing a registry change and the creation of a new exchange-oma virtual directory. I am now able to connect and access my mailbox content by browsing to mail.domainname.com/oma over SSL and entering my credentials. ActiveSync, however, was not working on a Windows Mobile phone or an iPhone. I found another article about using forms-based authentication and SSL in a single-server Exchange environment; the fix was to remove FBA and the SSL requirement from the Exchange virtual directory, and that allows ActiveSync to work. I have very few mobile users, but they are management, so I need ActiveSync working, yet I would like to get back to using SSL. http://support.microsoft.com/kb/817379 Any ideas about this setup? Thanks.
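    The KB article you link describes the combination that usually resolves this on a single server: keep forms-based authentication and SSL on the /Exchange virtual directory, leave the exchange-oma virtual directory on Integrated/Basic authentication with no SSL requirement, and point ActiveSync at it through the ExchangeVDir registry value. A hedged sketch (verify the exact key and value against KB 817379 before applying it):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MasSync\Parameters" /v ExchangeVDir /t REG_SZ /d "/exchange-oma"

    With that in place you can re-enable the SSL requirement on the user-facing directories while ActiveSync talks to the internal exchange-oma path.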

    Read the article
