Search Results

Search found 24117 results on 965 pages for 'write'.


  • How to set up server/domain name correctly in hosts file with HTTPS

    - by Byakugan
    I am setting up a local network with two machines: 1) the main server, which connects to the internet on a static IP, and 2) a second computer connected to the first locally at 192.168.0.2 - entering that address in the browser behaves as if I had typed localhost on the main server, so it serves my local site. The server's IP has a domain name pointing at it through the router, for example www.domain.com, so I added lines like these to the hosts file on my main server (Linux-powered): 192.168.0.2 domain.com www.domain.com. That worked fine: entering my domain name on the local computer showed my site. But some time later I added an HTTPS certificate and this line to my Apache config: Redirect permanent / https://www.domain.com/ - and now it doesn't work, even when I add something like this to my hosts file: 192.168.0.2 https://www.domain.com. Any idea how to make this work? Thank you.
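    A hosts file maps names to IP addresses only; it has no notion of schemes or ports, so a line beginning with https:// is simply ignored. A minimal sketch of the working split, assuming Apache 2.x with mod_ssl (certificate paths are examples, not the asker's actual files):

        # /etc/hosts - names to IPs, nothing else
        192.168.0.2   domain.com www.domain.com

        # Apache owns the HTTPS side, e.g. an SSL virtual host:
        <VirtualHost *:443>
            ServerName www.domain.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/domain.com.key
            DocumentRoot /var/www/domain.com
        </VirtualHost>

    With that in place, the Redirect permanent line in the port-80 virtual host keeps doing the scheme switch and the hosts file keeps doing name resolution - the two never overlap.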


  • copSSH: how to restrict a user from going above their home directory

    - by minus4
    I have installed SFTP on a Windows server using copSSH, and it all works well, except that users can navigate above their home directory. For example, with C:\copSSH\home\{username} as the home, that user can go back up into copSSH and into those directories too. I also have a user whose home is actually C:\inetpub\wwwroot (set as the path /cygdrive/c/inetpub/wwwroot), and that user can wander into the rest of the system as well. There is no write access outside the home, but there is read and download access. It would be ideal if a user could only go forward from the start directory, rather than out and about. Thanks.
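    Recent OpenSSH releases (which copSSH packages) can jail SFTP users with ChrootDirectory. A hedged sketch of the sshd_config side, assuming the installed copSSH ships OpenSSH 4.9 or later and that the chroot target satisfies OpenSSH's ownership rules (the user name is an example):

        # sshd_config
        Subsystem sftp internal-sftp

        Match User webuser
            ChrootDirectory /cygdrive/c/inetpub/wwwroot
            ForceCommand internal-sftp

    Inside the Match block the user's filesystem view starts at wwwroot, so there is no "up" left to browse; whether the chroot ownership checks behave the same under copSSH's Cygwin layer is worth verifying.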


  • failover cluster file replication

    - by user156144
    I have a Windows 2008 R2 failover cluster. I am going to move one of our Windows services onto this new server. The service writes trace information to a log file on the local hard drive, which becomes a problem once it moves to the cluster: when node A becomes unavailable and node B takes over, there are two places I need to look for log files. Is there a way to make sure that, regardless of which node is active, I get one complete log file? I have been researching this and found something called DFS Replication, but I was wondering if there is something better suited to a failover cluster. I'd prefer not to update my code; I can point the service at a different log location by changing the app.config file, but no code changes.
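    One code-free option is to point the log path at a clustered (shared) disk placed in the same resource group as the service, so the disk - and the single log file on it - fails over together with the service. A minimal app.config sketch; the key name is hypothetical, and S: stands for whatever letter the cluster disk is given:

        <configuration>
          <appSettings>
            <!-- Clustered disk that moves between nodes with the service -->
            <add key="LogFilePath" value="S:\Logs\MyService.log" />
          </appSettings>
        </configuration>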


  • Drop-in solution for logging to DB

    - by Jake
    I'm considering setting up our servers to log to a MongoDB database rather than to log files. Logs would then all be on one server, queryable, and overall easier to manage. I'd love to find a solution that lets all the different processes I have running write to the DB rather than to files (or perhaps something to read the files, pass the logs on, and truncate the files). I don't want to have to find a different solution for every process if I can avoid it. So, does anyone know of an existing solution to this problem?


  • Can I save & store a user's submission in a way that proves that the data has not been altered, and that the timestamp is accurate?

    - by jt0dd
    There are many situations where the validity of the timestamp attached to a post (a submission of information) could be invaluable to the post owner for legal purposes. I'm not looking for a service that achieves this, as requested in this great question, but rather for the method behind such a service. For the legal (in most any legal system) authentication of text content and its submission time, the owner of the content would need to prove two things: (1) that the timestamp itself has not been altered and was accurate to begin with, and (2) that the text content linked to the timestamp has not been altered. I'd like to know how to achieve this via programming - not a language-specific solution, but the methodology behind the solution. Can a timestamp be validated as accurate to the time the content was really submitted? Can data be stored in a form that can be read but provably not written to? In other words, can I save and store a user's submission in a way that proves that the data has not been altered and that the timestamp is accurate? I can't think of any programming method that would make this possible, but I am not the most experienced programmer out there. Based on MidnightLightning's answer to the question I cited, this sort of thing is being done. Clarification: I'm looking for a method (hashing, encryption, etc.) that would allow an average guy like me to achieve the desired effect through programming. I'm interested in this subject for the purpose of defensive publication. I'd like to learn a method that allows an everyday programmer to pick up his computer, write a program, pass information through it, and say: I created this text at this moment in time, and I can prove it. This means the information should be protected from the programmer who writes the code as well. Perhaps a third-party API would be required; I'm OK with that.
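    The usual building blocks are a cryptographic hash plus a trusted timestamp: hash the content, then have a party the author cannot influence (for example an RFC 3161 timestamp authority) sign that hash together with the current time. The hash proves the text is unchanged; the third-party signature proves the time - which is exactly the "protected from the programmer" property asked about, and why a third-party API is indeed required. A minimal C# sketch of the hashing half (the TSA exchange is a separate step; the file name is an example):

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class ContentFingerprint
        {
            static void Main(string[] args)
            {
                // Any later change to the submission changes this digest.
                byte[] content = File.ReadAllBytes(args[0]);
                using (SHA256 sha = SHA256.Create())
                {
                    byte[] digest = sha.ComputeHash(content);
                    Console.WriteLine(BitConverter.ToString(digest).Replace("-", ""));
                }
            }
        }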


  • Performance alert writes to event log but does not run program

    - by TooFat
    I followed the instructions in "How to create and configure performance alerts in Windows Server 2003" to set up an alert for when the available logical disk space on one of my drives goes below a certain number. I selected the option to write to the application event log, and selected the "Run this program" option with the path to a script that sends me an email. If I copy the path to the script and run it by hand, everything works and I get the email. When I start the alert I can see that the limit I set is being exceeded and entries are being written to the application log, but the email is never sent. I have the Run As user and password set to a domain admin. If I set the "Run this program" path to C:\Windows\System32\calc.exe, it doesn't start the calculator either. The Performance Logs and Alerts service is running as Local Admin with "allow service to interact with desktop". What am I doing wrong?
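    Two hedged guesses from the symptoms. First, the alert facility launches executables directly, so a script usually has to be wrapped in its interpreter as the program to run (paths here are examples):

        cmd.exe /c "C:\Scripts\send-alert-mail.cmd"
        rem or, for a VBScript version:
        cscript.exe //B "C:\Scripts\send-alert-mail.vbs"

    Second, "allow service to interact with desktop" only takes effect when a service runs as LocalSystem, not under a named account - so calc.exe may in fact be launching invisibly in the service's non-interactive context (Task Manager will show it). The email script has no such excuse, which points back at the interpreter wrapping.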


  • Google+ Platform Office Hours: I/O Recap

    This week we talked about Google I/O and reviewed some of the new Google+ platform features that were announced. Join the discussion about this session on Google+: goo.gl

        0:10 - Introductions
        2:55 - Stories about Google I/O 2012 #io12
        8:58 - The Sun is introduced
        9:40 - A brief introduction to the History API
        15:56 - Sign up for the History API developer preview
        17:13 - How to request a new moment type
        17:54 - Abraham and the History API at #iohack
        19:33 - Is the History API a Google+ write API?
        21:03 - The Sun joins our office hours (Thanks Chris Ridgeway!)
        24:00 - Does the history API work in a hangout yet?
        24:55 - Can Google+ Pages use the history API?
        26:40 - Should I use the official ruby Google API client library?
        28:48 - Should I index Google+ users by their profile ID or their email address?
        29:50 - Hangouts at I/O
        34:58 - Will Google+ history work with Gmail?
        36:05 - Does comments tracker work with events?
        36:25 - When will Hangouts On Air work in Germany?
        36:23 - Can we have screen capture of hangout video for use in the History API?
        39:50 - Can I run more than one Hangout App simultaneously?

    From: GoogleDevelopers | Time: 41:16


  • Can't make SELinux context types permanent with semanage

    - by Safado
    I created a new folder at /modevasive to hold my mod_evasive scripts and its log directory. I'm trying to change the context type to httpd_sys_content_t so Apache can write to the folder. I ran semanage fcontext -a -t "httpd_sys_content_t" /modevasive to change the context, then restorecon -v /modevasive to apply the change, but restorecon didn't do anything. So I used chcon to change the context manually, ran restorecon again to see what would happen, and it changed the context back to default_t. semanage fcontext -l gives:

        /modevasive/    all files    system_u:object_r:httpd_sys_content_t:s0

    And /etc/selinux/targeted/contexts/files/file_contexts.local contains:

        /modevasive/    system_u:object_r:httpd_sys_content_t:s0

    So why does restorecon keep setting it back to default_t?
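    One likely cause: file context specs are regular expressions matched against the full path, and the recorded spec /modevasive/ (note the trailing slash) matches neither /modevasive itself nor /modevasive/somefile, so restorecon falls back to the default_t catch-all. A hedged sketch of the usual spelling, which covers the directory and everything below it - and note that if Apache must actually write there, the type wanted is httpd_sys_rw_content_t rather than httpd_sys_content_t:

        semanage fcontext -a -t httpd_sys_rw_content_t "/modevasive(/.*)?"
        restorecon -Rv /modevasive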


  • Operation MVC

    - by Ken Lovely, MCSE, MCDBA, MCTS
    It was time to create a new site. With VS 2010 out, I figured I should write it using MVC and Entity Framework, and I have been very happy with MVC. My boss has had me building an administration site in MVC 2, but on 2008. I think one of the greatest features of MVC is that you work at the root of the app. It is a bit like being an ironworker: you work with the metal and mold it from scratch. Getting my articles out of my database and onto web pages was by far easier with MVC than it was with regular ASP.NET. The code below is what I use to render an article on its page. It's pretty straightforward: the menu link passes an id, which is simply the URL of the page; the query looks for that URL in the database and returns the rest of the article.

        DataResults dr = new DataResults();
        string title = string.Empty;
        string article = string.Empty;
        foreach (var D in dr.ReturnArticle(ViewData["PageName"].ToString()))
        {
            title = D.Title;
            article = D.Article;
        }

        public List<CurrentArticle> ReturnArticle(string id)
        {
            var resultlist = new List<CurrentArticle>();
            DBDataContext context = new DBDataContext();
            var results = from D in context.MyContents
                          where D.MVCURL.Contains(id)
                          select D;
            foreach (var result in results)
            {
                CurrentArticle ca = new CurrentArticle();
                ca.Title = result.Title;
                ca.Article = result.Article;
                ca.Summary = result.Summary;
                resultlist.Add(ca);
            }
            return resultlist;
        }


  • Converting a JD Edwards Date to a System.DateTime

    - by Christopher House
    I'm working on moving some data from JD Edwards to a SQL Server database using SSIS and needed to deal with the way JDE stores dates. The format is CYYDDD, where:

        C   = century: 1 for years >= 2000, 0 for years < 2000
        YY  = the last two digits of the year
        DDD = the number of the day: Jan 1 = 1, Dec 31 = 365 (or 366 in a leap year)

    The .NET base class library has lots of good support for handling dates, but nothing as specific as the JD Edwards format, so I needed to write a bit of code to translate the JDE format to System.DateTime. The function is below:

        public static DateTime FromJdeDate(double jdeDate)
        {
            DateTime convertedDate = DateTime.MinValue;
            if (jdeDate >= 30001 && jdeDate <= 200000)
            {
                short yearValue = (short)(jdeDate / 1000d + 1900d);
                short dayValue = (short)((jdeDate % 1000) - 1);
                convertedDate = DateTime.Parse("01/01/" + yearValue.ToString()).AddDays(dayValue);
            }
            else
            {
                throw new ArgumentException("The value provided does not represent a valid JDE date", "jdeDate");
            }
            return convertedDate;
        }

    I'd love to take credit for this myself, but it is an adaptation of a T-SQL UDF that I got from another consultant at the client site.


  • WYSIWYG editor for structured text (suitable for SVN versioning)

    - by chris_l
    I'm looking for an open source, cross-platform WYSIWYG editor that I can use to write documentation. I'm not looking for a web-based solution - it should work without a web server, and I want to save my files directly to disk. The result could be any structured format, like Wiki markup, reStructuredText, DocBook, or a small subset of HTML... But it's important that svn diff can be used to easily see differences between versions (this wouldn't work with .odt or .rtf files, for example). I'm currently thinking about using OpenOffice and saving the files as HTML, but is there a better solution?


  • Is there a remote file transfer command that preserves nanosecond timestamps?

    - by Denver Gingerich
    I've tried transferring files using scp and rsync on Ubuntu 10.04, but neither of them preserves more than second precision. Here's an example:

        $ touch test1
        $ scp -p test1 localhost:test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.000000000 -0500 test2
        $ cp -p test1 test2
        $ ls -l --full-time test*
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test1
        -rw-r--r-- 1 user user 0 2011-01-14 18:46:06.579717282 -0500 test2
        $

    A straight copy works fine, but scp truncates the timestamp. Are there any tools (preferably similar to scp or rsync in their usage) that do remote file transfers while preserving nanosecond timestamps? I could write a hacky script to do it, but I'd rather not.
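    A hedged possibility rather than a definitive answer: GNU tar's POSIX pax format records mtimes with fractional seconds, so piping a pax archive through ssh may carry the nanoseconds that scp drops - worth verifying with ls --full-time on the far side:

        tar -cf - --format=pax test1 | ssh user@remotehost 'tar -xpf - -C /some/destination'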


  • How do partitions help in optimizing the hard drive?

    - by Fasih Khatib
    I was recently reading a guide on Tom's Hardware about how to optimize a hard drive. Creating partitions was one of the suggestions: keeping the various kinds of files separate is a good idea, as it reduces the read/write cycles required. Now my query is: what size partitions should I make on my 500 GB hard drive? It's completely blank, and I will be installing Windows 7 on it. My usual strategy is to divide it into two equal partitions. Is that the optimum split?


  • Mac does not see the Ubuntu Samba share in Finder

    - by Mirage
    I have Ubuntu with Samba installed. Initially my Windows machines were not able to see the Ubuntu box in their network list; after searching a lot I found that I had to put this line in smb.conf, and it worked (I don't know why): ldap ssl = No. Now my Mac is also unable to see the Ubuntu machine, although if I click "Connect to Server" and use smb://servername, the connection is established. Is there anything I can do so that the Mac sees the Ubuntu share on its own and I don't need the "Connect to Server" step?
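    A hedged suggestion: Finder discovers servers over Bonjour (mDNS), so advertising the Samba service through Avahi often makes an Ubuntu box appear in the sidebar by itself. A sketch, assuming avahi-daemon is installed and the file is saved as /etc/avahi/services/smb.service:

        <?xml version="1.0" standalone='no'?>
        <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
        <service-group>
          <name replace-wildcards="yes">%h</name>
          <service>
            <type>_smb._tcp</type>
            <port>445</port>
          </service>
        </service-group>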


  • Tutorial for Quick Look Generator for Mac

    - by vgm64
    I've checked out Apple's "Quick Look Programming Guide: Introduction to Quick Look" page in the Mac Dev Center, but as more of a science programmer than an Apple programmer, it is a little over my head (though I could get through it in a weekend if I bashed my head against it long enough). Does anyone know of a good basic Quick Look generator tutorial that is simple enough for someone with only very modest Xcode experience? For those who are curious: I have a file type called .evt that has an XML header followed by binary data, and I'm trying to write a generator to display the XML header. There's no application bundle that the type belongs to. Thanks!


  • Migrating data from SQL Server 2000 to SQL Server 2005

    - by Muhammad Kashif Nadeem
    I have to migrate existing data from SQL Server 2000 to SQL Server 2005. The schemas of the two databases are different; for example, the Locations table in SQL Server 2000 is split into two tables in 2005 and has different columns. This is a one-time activity - after a successful migration I won't need the old database anymore. What is the best way to transfer data from one SQL Server to another when the schemas differ? I could write stored procedures to fetch data from SQL Server 2000 and insert/update the tables in SQL Server 2005. What about SSIS? I don't have any experience with it; is it worth creating an SSIS package when I don't need it again and would have to learn it first? Thanks.
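    For a one-off move, a linked server plus plain INSERT...SELECT statements is often the lightest option: no SSIS learning curve, and the reshaping lives in ordinary T-SQL. A hedged sketch - the server, database, and column names are made up for illustration:

        -- On the SQL 2005 box, register the old instance once:
        EXEC sp_addlinkedserver @server = N'OLD2000', @srvproduct = N'SQL Server';

        -- Then reshape rows as they come across, one statement per target table:
        INSERT INTO dbo.LocationAddresses (LocationId, Street, City)
        SELECT l.LocationId, l.Street, l.City
        FROM OLD2000.LegacyDb.dbo.Locations AS l;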


  • openldap search acl

    - by Patrick
    I'm trying to write an access control rule for OpenLDAP that allows a user to search with a certain base DN but only get results back from certain sub-DNs. I've tried lots of different rules but can't get it to work, and I'm not sure it's even possible. For example: I have a user with the DN uid=testuser,ou=people,dc=example,dc=com. I want this user to be able to search with a base of dc=example,dc=com and get back entries in ou=people,dc=example,dc=com. There are lots of other sub-OUs under dc=example,dc=com, but only entries in ou=people should be returned (for bonus points, I'd like only certain attributes to be returned as well). Can this be done?
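    It can be done, in outline: slapd silently omits any entry the bound DN cannot read, so granting read only on the ou=people subtree - plus enough access to the search base itself - produces exactly this filtering. A hedged slapd.conf sketch; clause order matters because the first access directive whose target matches an entry decides, and listing attributes after attrs= also covers the bonus requirement:

        # Entries under ou=people: readable, and only these attributes.
        access to dn.subtree="ou=people,dc=example,dc=com" attrs=entry,cn,mail
            by dn.exact="uid=testuser,ou=people,dc=example,dc=com" read

        # The search base itself must be readable for the search to start.
        access to dn.base="dc=example,dc=com" attrs=entry
            by dn.exact="uid=testuser,ou=people,dc=example,dc=com" read

    Entries in the other OUs match neither directive for this user, fall through to the implicit default deny, and are left out of the result set.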


  • "Integratable" but not "integrated" GPL

    - by mgibsonbr
    There has been much debate over whether merely linking to a piece of code makes it a derivative work. I know the FSF says "yes", so according to them I can't dynamically link a non-GPL-compatible program to a GPL library and distribute the whole. But I could do that for private use, as long as no code is released to the public. That made me wonder: what if I don't redistribute the GPL code at all? If my program can work alone (reinforcing my claim that it's not a derivative work), but can do more if the GPL library is also installed on the system, couldn't I just release my application under my own licensing terms - without including any GPL code - and post instructions for anyone interested to separately download the GPL code and do the integration "for their private use"? I know it's against the "spirit" of the GPL, so I'm not suggesting it's a good idea to do that. However, this question has been bugging me for some time, especially because of the implications of each answer:

        If I can NOT do that: can I write another library with a similar API? (Before answering "of course you can", remember that having the same API would allow both libraries to be swapped at will by my customers - so I wouldn't need to work too hard on my library or even make it "working". How do you determine whether a similar program is just similar or is a circumvention attempt?)

        If I CAN do that: can I also be paid to perform the service of installing the GPL library for a customer? (I sell them my program, install it on their machines, then download and install the GPL library too.) Can I put the two programs on the same website? On two different CDs? (I know I said the idea was not to redistribute the GPL code; I'm just thinking of excuses people could use to claim they're not redistributing even though they are.)


  • Samsung 830 very slow benchmark numbers

    - by alekop
    I just bought a new SSD and installed a fresh copy of Windows on it. I didn't see any noticeable difference in boot times or app start-up times, so I decided to benchmark it. The setup:

        Asus P7P55D-E
        Intel i5-760
        Samsung 830 256GB SATA III
        Windows 7 Ultimate 64-bit

    The Windows Experience Index gave the drive a 7.3 rating, but real-world performance is not particularly impressive. Any ideas why the numbers are so low?

    UPDATE: It turns out that SATA III support is turned off by default on the P7P55D motherboard. After enabling it in the BIOS (Tools - Level Up), the scores went up:

                Read    Write
        Seq     325     183
        4K      16      49
        IOPS    32K     28K

    It's an improvement, but still far below what this drive should deliver.


  • HealthSouth Upgrades to Oracle Database 11g Release 2 and Oracle RAC

    - by jenny.gelhausen
    HealthSouth Corporation, the nation's largest provider of inpatient rehabilitation services, has upgraded to Oracle Database 11g Release 2 underneath PeopleSoft Enterprise Human Capital Management. Additionally, HealthSouth improved the availability and performance of its Oracle PeopleSoft Enterprise applications and Enterprise Data Warehouse using Oracle Database 11g and Oracle Real Application Clusters. Oracle Database options Oracle Advanced Compression and Oracle Partitioning are key to HealthSouth's data lifecycle management practices and to utilizing storage systems more efficiently. Using compression on both partitioned and non-partitioned tables in its data warehouse, HealthSouth has seen a 4X storage reduction without any cost to performance. "Oracle Database 11g, along with Oracle Real Application Clusters, Advanced Compression and Partitioning, all lend themselves to delivering highly available, performant data warehousing," said Henry Lovoy, Data Manager, HealthSouth Corporation. Press Release


  • Windows Recovery Console - forgot password

    - by Jason
    I upgraded to Windows XP SP3, which immediately "broke" the laptop - it never booted again with SP3 on it. I put in the Windows XP install disk I had originally used to set up the laptop; it ran for a while, then said there was no hard disk present and it couldn't continue. The BIOS still sees the hard disk. I put the hard disk in an external USB case, and I can read/write to it from another laptop. I then put the disk back in its own laptop, restarted with the Windows CD, and tried to get into the Recovery Console, but I've forgotten the password and can't "log on" to the drive. I'd also like to know whether I can fix the broken files (which ones?) from the other laptop (via USB), and whether I can "log on" to an external disk with the Recovery Console. (Also, the data won't fit on my other laptop, and I don't have all the install CDs for the software on the disk.) Any help appreciated.


  • TDD/tests: too much of an overhead/maintenance burden?

    - by MeshMan
    You've heard it many times from those who do not truly understand the value of testing. To set the scene: I'm a follower of agile and of testing. I recently had a discussion about using TDD on a product rewrite, where the current team does not practice unit testing at any level and has probably never heard of dependency injection, test patterns, or test design (we won't even get onto clean code). Now, I am fully responsible for the rewrite of this product, and I'm told that attempting it TDD-style will merely make it a maintenance nightmare, impossible for the team to maintain. Furthermore, since it's a front-end application (not web-based), adding tests is supposedly pointless: as the business drives changes (by changes they of course mean improvements), the tests will become out of date, other developers who come onto the project in the future will not maintain them, and they will become more of a burden to fix, and so on. I can understand that TDD sounds daunting to a team with no testing experience, but my argument is that I can teach my practice to those around me - and, more importantly, that TDD makes BETTER software. Even if I produced the software using TDD and threw all the tests away on handing it over to a maintenance team, surely that would be a better approach than not using TDD at all from the start? I've been shot down for suggesting TDD on most projects, by teams that have never heard of it; the thought of "interfaces" and strange-looking DI constructors scares them off. Can anyone help me with what is normally a very short conversation of trying to sell TDD and my approach to people? I usually have a very short window of argument before falling at the knees of the company/team.


  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I've been directed here from Stack Overflow, so I'm reposting the question and adding my my.cnf at the end of the post. So far in my 10+ years of experience with Linux, every permission problem I've ever encountered has been successfully solved with chmod -R 777 /path/where/the/problem/has/occured (every lie has a grain of truth in it :). This time the trick doesn't work, so I'm turning to you for help. I'm compiling a MySQL server from scratch with zc.buildout (www . buildout . org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching it from within a supervisor (supervisord . org) script, and when used on the deployment server it will need to be launched with root permissions (so that the nginx server, launched by the same script, can bind to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the MySQL error log (apache and nginx work as expected). http://lists.mysql.com/mysql/216045 suggests that "there are two errors: a missing table and a file system that mysqld doesn't have access to". The mysql datadir and all the server binaries have 777 permissions; the table mysql.plugin does exist and has 777 permissions (so why can't it open the mysql.plugin table?); "sudo touch mysql_datadir/tmp/file" does create a file (so why can't it create/write to /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz?). Running chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) had no effect: when I launch it as a normal user it starts; as root, it fails. Does the MySQL server, even when started as root, try to operate as another user, say 'mysql'? Even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to that group should make things work smoothly. I do know it might be a better idea simply to launch nginx as root and mysql as a plain user, but this error has irritated me enough that I want not only to "make things work", but to make them work exactly as I initially intended, as a proof of concept. This is the generated error:

        091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir
        /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only
        091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13)
        091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13
        091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error.
        091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied
        091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
        091213 20:02:55 [ERROR] Aborting
        091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete
        091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended

    My my.cnf (the basedir and datadir, including tmpdir, have chmod -R 777 permissions):

        [client]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        port = 8002

        [mysqld_safe]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        port = 8002
        pid-file = /home/toinbis/.../runtime/var/pids/mysql.pid
        basedir = /home/toinbis/.../parts/mysql
        datadir = /home/toinbis/.../runtime/mysql_datadir
        tmpdir = /home/toinbis/.../runtime/mysql_datadir/tmp
        skip-external-locking
        bind-address = 127.0.0.1
        log-error = /home/toinbis/.../runtime/logs/mysql_errorlog
        #
        # * Fine Tuning
        #
        key_buffer = 16M
        max_allowed_packet = 32M
        thread_stack = 128K
        thread_cache_size = 8
        myisam-recover = BACKUP
        #max_connections = 100
        #table_cache = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 16M
        #
        # * Logging and Replication
        #
        # Both locations get rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        #server-id = 1
        #log_bin = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log
        #binlog_format = ROW
        #read_only = 0
        #expire_logs_days = 10
        #max_binlog_size = 100M
        #sync_binlog = 1
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_buffer_pool_size = 64M
        innodb_log_file_size = 16M
        innodb_log_buffer_size = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_file_per_table
        innodb_locks_unsafe_for_binlog = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 32M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Any ideas much appreciated! Regards, to

    P.S. Sorry for the messy hyperlinks; it's my first post, and SF's anti-spam feature doesn't allow me to post them properly :)
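    A hedged reading of the failure rather than a definitive answer: when mysqld itself ends up running as root (mysqld_safe launched under sudo with no --user option), the daemon is running as an account the file ownership was never prepared for, and symptoms like a "read only" mysql.plugin table and errno 13 on InnoDB temp files can follow. The usual sketch of a fix keeps sudo for the supervisor but hands the daemon to the mysql user:

        sudo chown -R mysql:mysql /home/toinbis/.../runtime/mysql_datadir
        sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe --user=mysql

    supervisord can likewise run nginx as root for port 80 while starting mysqld under its own per-program user= setting, so root privileges never reach mysqld at all.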


  • Partner Infoline & Service Portal

    - by uwes
    As an EMEA-wide team we're supporting the daily work of our partners. Our team consists of 24 sales consultants, one third is specialized on the Partner Infoline. Partner Infoline's main focus is to deliver actively and reactively technical pre sales knowledge about the Oracle hardware portfolio to our partners.With infoline we assist our partners in their daily work, furthermore we help to educate our partners to be self sufficient in all aspects and questions about hardware configurations and hardware quotes. For our Infoline Service we use a ticketing system called Service Portal which is widely used within Oracle and delivers a good and stable functionality and availability. Our Infoline-Service provides answers to questions concerning technical pre-sales matters that are related to hardware and the corresponding hardware related software.* You can address these types of questions by sending them to our mailing list: [email protected] The serviceportal will send you an auto-reply including a unique reference number, which will be the identification for your request until it is closed. Depending on the complexity of the request, it might be necessary to forward it to our specialists (servers, storage, tape, Solaris etc.) located whole over Europe. In order to make the whole process smooth here are some recommendations: write your request in English; saves translation-time, when it has to be forwarded to the specialists stating clearly in the title your interest area, like for example "memory in M4000 server". one request/one subject; makes it easier to maintain and keep the correspondence clear and simple. The rule of the service is to provide an answer quick, which means the vast majority of the requests are answered within a couple of hours. However please keep in mind that some requests may need extra work by involving the appropriate person within Europe or even in US. Therefore there is no official SLA for this service. * This excludes Oracle "classic" products and post-sales support. The latter should still be addressed through MOS (http://support.oracle.com)


  • Backing up VM data to host drive on Windows 7

    - by malcolms
    Hi, I have created a VM with Virtual PC on Windows 7. I am writing a batch file to back up data in the VM to a USB drive on the host. I have shared the host drives, and the USB drive I want to back up to appears in Windows Explorer as "H on Malcolm-Desktop", but I cannot seem to map a drive letter to it. How do I refer to the USB drive in the batch file? This is what I have tried:

        XCOPY C:\Inetpub\wwwroot "\\H on Malcolm-Desktop\HALII_VHD_Backup\DataBackup\Inetpub\wwwroot" /S /E /Y /D

    How do I write this command? Malcolm
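    A hedged sketch, assuming Windows Virtual PC publishes host drives through its RDP-based integration features, where "H on Malcolm-Desktop" is typically reachable inside the VM as \\tsclient\H (the Explorer address bar will confirm the exact name):

        rem Map the integration-features share to a letter, copy, then clean up.
        net use Z: \\tsclient\H
        xcopy C:\Inetpub\wwwroot "Z:\HALII_VHD_Backup\DataBackup\Inetpub\wwwroot" /S /E /Y /D
        net use Z: /delete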

