Search Results

Search found 22641 results on 906 pages for 'use case'.

Page 439/906

  • syslog facilities

    - by user65971
    Hi, I have an application (in Java) running on a Windows PC and I want to send logging messages to a syslog server running on a Linux box somewhere on the network. The problem I have is that it is not clear to me what I should use as the facility in this case. That is, can I (or should I) send the logging info as one of LOCAL0-LOCAL7? Or are they not supposed to be used by remote applications? It is not clear to me whether they are usable or not. Should I use USER instead? Could anyone help me with this, please? Thank you
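
    A minimal sketch of one common way to do this, assuming the application uses log4j 1.x (the question only says Java, so the appender and its property names are an assumption, not something stated in the post). The SyslogAppender is pointed at the remote host and given one of the LOCAL facilities, which are intended for exactly this kind of site-specific use:

      # Hypothetical log4j.properties fragment (log4j 1.x SyslogAppender assumed)
      log4j.rootLogger=INFO, SYSLOG
      log4j.appender.SYSLOG=org.apache.log4j.net.SyslogAppender
      # Address of the Linux syslog server (placeholder)
      log4j.appender.SYSLOG.SyslogHost=192.0.2.10
      # LOCAL0-LOCAL7 are reserved for local/site use, so picking one here is normal practice
      log4j.appender.SYSLOG.Facility=LOCAL0
      log4j.appender.SYSLOG.layout=org.apache.log4j.PatternLayout
      log4j.appender.SYSLOG.layout.ConversionPattern=myapp: %-5p %c - %m%n

    The syslog server then needs its own rule (for example an rsyslog or syslog.conf line for local0.*) to route those messages to a file.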

    Read the article

  • Cannot dump svn repository

    - by vinga
    I have a problem with my SVN repository: I cannot use it, and I cannot even dump it. svnadmin verify repo returns "Can't set position pointer in file 'svn/db/revs/0/0'". When I try to dump the repository (no matter what revision range), the console output shows: "* Dumped revision 0. svnadmin: Final line in revision file missing space". I've read that this may be connected to a wrong version of the apr/apache2 library, but I have other repositories that work fine, so I don't think that's the case. Is there any way to save at least some files from my repository? Can an SVN repository really get corrupted this easily (probably after a power cut, though I'm not sure)?
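
    Since the errors all point at revision 0, a hedged recovery sketch (paths and the upper revision number are placeholders, and this may still fail if later revisions reference the damaged one) is to dump everything after the broken revision and load it into a fresh repository:

      # Find the youngest revision, then dump everything after the damaged r0
      svnlook youngest repo
      svnadmin dump repo -r 1:1234 --incremental > partial.dump
      svnadmin create repo-recovered
      svnadmin load repo-recovered < partial.dump

    If svnadmin dump also refuses to read the repository, checking out or exporting a working copy from a still-running server, if one exists, is another way to rescue at least the latest versions of the files.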

    Read the article

  • Bacula configuration for clients that are turned on and off randomly

    - by Rastloser
    I'm evaluating Bacula as a centralized backup tool for a small network where users will turn machines on and off unpredictably. Some of the headless Linux boxes I need to back up are intended to be turned off by pressing the on/off-button on the case, without any way of telling the user to wait for a backup job to finish. So, we don't know when backup jobs may run (anacron might help with this, right?) and we don't know whether they'll be allowed to finish. Is Bacula a reasonable choice for such an environment?

    Read the article

  • How do I block requests to Apache on a network interface?

    - by Dmitry Dulepov
    The problem: I have a local Apache instance on my MacBook Pro. I need it to listen on all network interfaces except en0 and en1 (basically, listen on lo and vnicX from Parallels). I know about "Listen *:80", but that is not a solution in this particular case. The only thing I could think of is to use the OS X firewall to block incoming requests to Apache on those interfaces, but I could not find any working examples and could not write such rules myself. Could somebody help, please?
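
    A hedged alternative to a firewall rule, sketched below: instead of Listen *:80, bind Apache only to the addresses of the interfaces it should serve. The addresses are placeholders; the real lo and Parallels vnicX addresses would have to be substituted, and the config reloaded whenever the vnicX address changes.

      # httpd.conf sketch: listen only on loopback and the Parallels host-only address
      Listen 127.0.0.1:80
      Listen 10.211.55.2:80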

    Read the article

  • Computer loads BIOS, but won't load OS

    - by LEGEND383
    I have just purchased a brand new motherboard bundle and installed it into the old case with the old PSU and the old HDD. I can get into the BIOS, but whenever I try to boot from the HDD, it just sits there with the fans going and nothing is displayed. The monitor works (tested with another machine), and I'd hope there are no problems with the motherboard, CPU or RAM because, as I said, I only bought them today. The only things I can think of are:
    - The PSU's motherboard connector is a 20-pin, and the motherboard has a 24-pin connector (this was not a problem with the previous board)
    - The OS is not supported (doesn't seem likely to me, but possible I guess)
    Here is my system configuration:
    Motherboard: ASUS F1A55-M LE
    CPU: AMD APU A6-3500
    HDD: 1 TB SATA
    RAM: 4 GB DDR3
    OS: Ubuntu Satanic v666.9
    PSU: Winpower ATX-400 (this thing is REALLY old)
    If anyone is able to offer a reason why this is not working, or a possible solution, it would be greatly appreciated.

    Read the article

  • Why use Fragments?

    - by ahmed_khan_89
    I have read the documentation and some other question threads about this topic, and I don't really feel convinced; I don't clearly see the limits of this technique. Fragments are now presented as a best practice: every Activity should basically be a host for one or more Fragments rather than calling a layout directly. Fragments were created to:
    - Allow an Activity to use many fragments, switch between them, and reuse these units. == But a Fragment is totally dependent on the Context of an Activity, so if I need something generic that I can reuse and handle in many Activities, I can create my own custom layouts or Views and skip the additional layer of complexity that fragments add.
    - Handle different resolutions better. == OK for tablets vs. phones: a long flow can show two (or more) fragments in the same Activity on a tablet, and one at a time on a phone. But why would I always use fragments?
    - Handle callbacks to navigate between Fragments (e.g. if the user is logged in I show one fragment, otherwise I show another). == Just look at how many bugs the Facebook SDK log-in has because of this to question whether it is really worth it...
    - Considering that an Android application is based on Activities, adding another life cycle inside the Activity should make it easier to design the application: the modules, the scenarios, the data management and the connectivity would be better designed that way. == This is the answer of someone who is used to seeing the Android SDK and framework through a Fragments lens. I don't think it's wrong, but I'm not sure it gives good results, and it is really abstract...
    So why would I complicate my life and write more code by using them always? Otherwise, why is it a best practice if it's just a tool for some cases, and what are those cases?

    Read the article

  • How to handle the fear of future licensing issues of third-party products in software development?

    - by Ian Pugsley
    The company I work for recently purchased some third-party libraries from a very well-known, established vendor. There is some fear among management that our license to use the software could somehow be revoked. The example I'm hearing is something like a patent issue; i.e. the company we purchased the libraries from could be sued and legally lose the ability to distribute and provide the libraries. The big fear is that we get some sort of notice that we have to cease usage of the libraries entirely, with only a small time period to do so. As a result of this fear, our ability to use these libraries (which the company has spent money on...) is being limited, at the cost of many hours' worth of development time. Specifically, we're having to develop lots of the features that the library already incorporates. Should we be limiting ourselves in this way? Is it possible for the perpetual license granted to us by the third party to be revoked in the case of something like a patent issue, and are there any examples of something like this happening? Most importantly, if this is something to legitimately be concerned about, how do people ever go about taking advantage of third-party software while preparing for the possibility of losing that capability entirely? P.S. - I understand that this ventures into legal knowledge, and that none of the answers provided can be construed as legal advice in any fashion.

    Read the article

  • How to inspect/modify Windows Firewall rules while the Windows Firewall/ICS service is stopped and disabled?

    - by Kal
    I'm trying to fix up my friend's remote Windows Server 2003 R2 machine, and I have Remote Desktop access at the moment. However, I notice that the Windows Firewall/Internet Connection Sharing service on the remote machine is disabled, which seems like a bad idea. If I enable and start the service now, I may lose my Remote Desktop access in case the exception rule for Remote Desktop has not been defined in Windows Firewall. So I need a way to inspect and modify exception rules even while the Windows Firewall/ICS service is stopped and disabled. Does anybody know how?
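
    A hedged sketch of one way to check (and pre-seed) the rules before starting the service: on Windows Server 2003 the firewall policy lives in the registry and can be read or written while the service is stopped. The exact path and value format below are from memory, so verify them against a machine where the rule was created through the UI, and note that the relevant profile (Standard vs. Domain) depends on how the server is joined.

      rem Inspect the stored "open ports" exceptions without starting the service
      reg query "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\GloballyOpenPorts\List"

      rem Hypothetical: pre-create a Remote Desktop exception (TCP 3389) so it is in place when the service starts
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\StandardProfile\GloballyOpenPorts\List" /v "3389:TCP" /t REG_SZ /d "3389:TCP:*:Enabled:Remote Desktop" /f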

    Read the article

  • How are the conceptual pairs Abstract/Concrete, Generic/Specific, and Complex/Simple related to one another in software architecture?

    - by tjb1982
    (= 2 (+ 1 1))
    Take the above. The requirement of the '=' predicate is that its arguments be comparable. Any two structures are comparable in this case, so the contract/requirement is pretty generic. The '+' predicate requires that its arguments be numbers. That's more specific.
    (socket domain type protocol)
    The arguments here are much more specific (even though the arguments are still just numbers and the function itself returns a file descriptor, which is itself an int), but the arguments are more abstract, and the implementation is built up from other functions whose abstractions are less abstract, which are themselves built from less and less abstract abstractions - down to the point where the requirements are something like "move from one location to another, observe whether the switch at that location is on or off, turn the switch on or off, or leave it the same", etc. But are functions also less and less complex the less abstract they are? And is there a relationship between the number and range of arguments of a function and the complexity of its implementation, as you go from more abstract to less abstract, and vice versa?
    (= 2 (+ 1 1) 2r10)
    The '=' predicate is more generic than the '+' predicate, and thus could be more complex in its implementation. The '+' predicate's contract is less generic, and so could be less complex in its implementation. Is this even a little correct? What about the 'socket' function? Each of those arguments is a number of some kind, but what they represent is something much more elaborate. It also returns a number (just like the others do), which is likewise a representation of something conceptually much more elaborate than a number.
    To boil it down, I'm asking whether there is a relationship between the following dimensions, and why:
    - Abstract/Concrete
    - Complex/Simple
    - Generic/Specific
    And more specifically, do different configurations of these dimensions have a specific, measurable impact on the number and range of the arguments (i.e., the contract) of a function?

    Read the article

  • Replace values in a column of a space-delimited file in Vim

    - by user1256923
    I have a file that looks like:

      2067 24311 <hkxhk> {00}
      2069 17219 <hkxhk> {00}
      2071 20931 <hkxhk> {00}
      2073 5557 <hkxhk> {00}
      2075 2127 <hkxhk> {00}
      2077 20947 <hkxhk> {00}
      2081 18088 <hkxhk> {00}

    I want to replace the first column value so that it looks like:

      5 24311 <hkxhk> {00}
      5 17219 <hkxhk> {00}
      5 20931 <hkxhk> {00}
      5 5557 <hkxhk> {00}
      5 2127 <hkxhk> {00}
      5 20947 <hkxhk> {00}
      5 18088 <hkxhk> {00}

    where the first space-delimited column has been replaced by a new value, in this case 5.
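
    A minimal sketch of one way to do this in Vim, assuming every line starts with that numeric column and 5 is the new value:

      :%s/^\S\+/5/

    ^\S\+ matches the first run of non-whitespace characters on each line, so only the first column is replaced; restrict the range (e.g. :10,20s/...) if only part of the file should change.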

    Read the article

  • In OpenOffice Spreadsheet, how can I set the default Date format?

    - by Joe Casadonte
    I'm using OO 3.1.1 on Ubuntu 9.10 (in case that matters to the answer). I like my dates to appear as YYYY-MM-DD. I can't think of a time when I want to see a date in any other format, so I'm constantly changing how dates look. That's manageable, though annoying. What's gotten me to the point of posting is that when I edit a cell with a date value, I have to edit it in the format MM/DD/YYYY, which is really, really annoying, as I'm usually changing the day (or possibly the month) and very seldom the year. That means lots of cursor or mouse use, wasting my time. So is there a way that I can change how dates are edited, or at least the default display format? Thanks!

    Read the article

  • How many XMLHttpRequests are too many for a PC to handle?

    - by Uri
    I'm running MediaWiki on Apache on a regular PC running Vista (I don't know the exact specs, but I'm guessing at least a 2 GHz Core 2 Duo processor and a broadband connection of at least 500 kb/s, probably 1 Mb/s). I want to use the MediaWiki API to send a lot of requests to this server. Most of the time the requests will be sent over the LAN (but sometimes over the internet). I'm talking thousands of requests every few seconds in the worst case. (A lot of these requests may repeat themselves; I guess some sort of cache would help.) Will the server handle this, or do I need a stronger/dedicated computer? (I'm not looking for a specific yes/no, I just want to get an idea of what computer configuration will support how many requests per second.) Thanks

    Read the article

  • setreg.exe for Windows 7

    - by victoriah
    I want to use setreg.exe (http://msdn.microsoft.com/en-us/library/aa387700(VS.85).aspx) to disable the certificate revocation check. However, it's based on an older version of .NET than what I have. Microsoft says it shipped with the older .NET SDK, but when I download that and try to install it, it says something like "can't install SDK without .NET 1.1". The linked article says that for newer versions I should use SignTool, but SignTool does not appear to have the function I need. Is it possible to either a) find a tool that can perform the function I need without installing the older .NET, or b) get setreg.exe without downloading the SDK? Or do I need to install the older .NET on my machine? And if the latter is the case, what do I need to do to install an older .NET? Will it overwrite my current .NET? Thanks

    Read the article

  • How to control routes added by RasDial

    - by Robert Dodier
    I am using the RasDial function on a Windows box (Windows Server 2008) to dial a device from which the server then reads data. It seems that some new routes are added to the routing table when the dial-up connection is made, and that interferes with other network interfaces on the server. In particular, RasDial adds a default route which sends traffic to the device, which makes the server unreachable until the connection is dropped. Is there a way to control which routes are added by RasDial? I have been studying Microsoft's documentation for RasDial and the associated structures (RASDIALPARAMS, RASDIALEXTENSIONS) without finding anything about routing. There is an option for "Use default gateway on remote network" when configuring a VPN, but I don't see how to apply that in this case. Thanks for any light you can shed on this problem.
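
    A hedged sketch of one workaround: the "Use default gateway on remote network" behaviour of a dial-up entry is stored as a flag in the phonebook file the entry lives in (rasphone.pbk), and RasDial dials whatever phonebook entry it is given. The key name below is from memory of the phonebook format, so treat it as an assumption; the safe way to confirm it is to toggle the checkbox in the connection's TCP/IP properties and diff the .pbk file.

      ; Fragment of rasphone.pbk for the entry passed to RasDial (entry name is hypothetical)
      [MyDeviceEntry]
      IpPrioritizeRemote=0

    With IpPrioritizeRemote=0 the link should no longer install itself as the default route. If that is not enough, another option is to delete the unwanted default route (route delete, or DeleteIpForwardEntry from the IP Helper API) immediately after RasDial returns successfully.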

    Read the article

  • Route using a certain IP address

    - by spa
    I have a server with two public IPs. Both IPs are added to eth0 using ip addr add. Now I'd like to contact a server which uses IP address filtering; only requests that come from the second IP address are allowed. Is there a way to set this up using the standard route command in Linux? I guess that's not the case, so the only solution I see right now is to set up a virtual device, say eth0:0, bind the second IP address to it, and then reference that device in the route command. Edit: I can't easily use the second IP as the primary one, as that IP is used as a failover IP.
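
    A hedged sketch of an iproute2 alternative (all addresses below are placeholders): a route can carry a preferred source address, so no alias device is needed just to control which IP outgoing connections use.

      # 203.0.113.50 = the server doing IP filtering
      # 198.51.100.1 = the existing default gateway
      # 198.51.100.2 = the second public IP already added to eth0
      ip route add 203.0.113.50/32 via 198.51.100.1 src 198.51.100.2

    Connections to that one destination will then originate from the second IP, while everything else keeps using the primary address.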

    Read the article

  • How to match responses from a server with their corresponding requests? [closed]

    - by Deele
    There is a server that responds to requests on a socket. The client has functions to emit requests and functions to handle responses from the server. The problem is that the request-sending function and the response-handling function are two unrelated functions. Given a server response X, how can I know whether it's a response to request X or some other request Y? I would like to make a construct that would ensure that response X is definitely the answer to request X, and also to make a function requestX() that returns response X and not some other response Y. This question is mostly about the general programming approach and not about any specific language construct. Preferably, though, the answer would involve Ruby, TCP sockets, and PHP. My code so far:

      require 'socket'

      class TheConnection
        def initialize(config)
          @config = config
        end

        def send(s)
          toConsole("--> #{s}")
          @conn.send "#{s}\n", 0
        end

        def connect()
          # Connect to the server
          begin
            @conn = TCPSocket.open(@config['server'], @config['port'])
          rescue Interrupt
          rescue Exception => detail
            toConsole('Exception: ' + detail.message())
            print detail.backtrace.join('\n')
            retry
          end
        end

        def getSpecificAnswer(input)
          send "GET #{input}"
        end

        def handle_server_input(s)
          case s.strip
          when /^Hello. (.*)$/i
            toConsole "[ Server says hello ]"
            send "Hello to you too! #{$1}"
          else
            toConsole(s)
          end
        end

        def main_loop()
          while true
            ready = select([@conn, $stdin], nil, nil, nil)
            next if !ready
            for s in ready[0]
              if s == $stdin then
                return if $stdin.eof
                s = $stdin.gets
                send s
              elsif s == @conn then
                return if @conn.eof
                s = @conn.gets
                handle_server_input(s)
              end
            end
          end
        end

        def toConsole(msg)
          t = Time.new
          puts t.strftime("[%H:%M:%S]") + ' ' + msg
        end
      end

      @config = Hash[ 'server'=>'test.server.com', 'port'=>'2020' ]
      $conn = TheConnection.new(@config)
      $conn.connect()
      $conn.getSpecificAnswer('itemsX')
      begin
        $conn.main_loop()
      rescue Interrupt
      rescue Exception => detail
        $conn.toConsole('Exception: ' + detail.message())
        print detail.backtrace.join('\n')
        retry
      end
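
    A hedged sketch (not from the original post) of the usual approach: give every request a handle when it is sent, remember the handles in order, and hand each incoming response to the oldest outstanding handle. This only matches correctly if the server answers requests strictly in the order it received them; a protocol that can answer out of order needs an explicit request ID echoed back in the response instead. All names below (PendingRequests and so on) are made up for illustration.

      require 'thread'

      # Pairs each outgoing request with the response that answers it,
      # assuming the server replies strictly in request order.
      class PendingRequests
        def initialize
          @pending = Queue.new               # FIFO of [request, handle] still awaiting an answer
        end

        # Sending side: remember the request and return a handle the caller can wait on.
        def register(request)
          handle = Queue.new
          @pending << [request, handle]
          handle
        end

        # Receiving side: called for every response line the server sends back.
        def deliver(response)
          entry = begin
            @pending.pop(true)               # non-blocking pop; raises ThreadError if nothing is pending
          rescue ThreadError
            nil
          end
          return if entry.nil?               # response with no matching request
          request, handle = entry
          handle << [request, response]
        end
      end

      # Usage sketch:
      #   pending = PendingRequests.new
      #   handle  = pending.register("GET itemsX")    # just before calling send("GET itemsX")
      #   ...
      #   pending.deliver(s)                          # inside handle_server_input(s)
      #   request, response = handle.pop              # waits until *this* request is answered

    One caveat: in a single-threaded select loop like the one above, handle.pop would block the loop itself, so there the handle would be polled between iterations (handle.pop(true) rescue nil) or the socket reading moved to its own thread.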

    Read the article

  • SQL Server 2005 Replication Subscription Expiring Warning

    - by Aaron
    This week one of my replication subscriptions expired because I wasn't getting any alerts saying that there was a login error (I've since fixed those alerts and the error). What I'd like now, in case this happens again, is to be able to send an alert saying that a subscription is about to expire (i.e., it will expire in 1 or 2 days). I have an alert set up for when a subscription expires, but that is after the fact. I've looked through sys.messages for any text that has "Expir" in it, but I haven't found an appropriate error code yet. Would anyone be able to point me in the right direction? Thanks.

    Read the article

  • connect to my database from another computer

    - by user3482102
    Sorry for being a noob. I am a student working on a DBMS project on my laptop. I have installed MariaDB and I have root access; the same goes for my friend's laptop. The problem is that we both want to work on the same database collectively, as we are partners on a team for this project. How can I create a database in MariaDB that we can both share? How do we access that database? Please specify the software to use and how to set it up.
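
    A hedged sketch of the usual way to share one MariaDB instance over a LAN (database name, user, password and address are placeholders): run the server on one laptop, let it listen on the network, and create an account the other machine can log in with.

      # On the laptop hosting the database, in the server config (e.g. /etc/mysql/my.cnf):
      #   bind-address = 0.0.0.0     (listen on the LAN instead of only localhost)

      -- Then, as root in the mysql/mariadb client on the host laptop:
      CREATE DATABASE project_db;
      CREATE USER 'partner'@'%' IDENTIFIED BY 'choose_a_password';
      GRANT ALL PRIVILEGES ON project_db.* TO 'partner'@'%';
      FLUSH PRIVILEGES;

      -- From the friend's laptop (192.0.2.15 standing in for the host laptop's LAN address):
      --   mysql -h 192.0.2.15 -u partner -p project_db

    Any client works on top of this (the command-line client or a GUI such as HeidiSQL or phpMyAdmin); only the server side has to live on one machine.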

    Read the article

  • Network(ing) to the Limit

    - by Oracle OpenWorld Blog Team
    By Karen Shamban
    While Oracle OpenWorld attendees are networking, there's an Oracle Global IT team that builds and maintains the massive networks that help run the show. The objective? To keep things running as seamlessly and smoothly as possible, constantly evaluate priorities, mitigate risk, and be ready for whatever might happen -- because things do happen when there are 50,000-plus attendees, tens of thousands of devices, unexpected requirements, and a constant flow of up-to-the-minute information. Here's just some of what it takes to keep the conference going, network style:
    - 100 Oracle network, voice, and desktop engineers; security, risk management, and other IT experts, who come in from 17 countries
    - 1,000+ network switches
    - 300+ miles of copper and fiber
    - 485 wireless access points
    - 2,500 wired laptops
    - 300 VoIP phones
    And just where are all these networks and devices deployed? This is what the team had to build and manage:
    - Moscone North, South, and West, including the keynote hall, Oracle DEMOgrounds in the Exhibition Halls, hundreds of session rooms, Connection Centers, Social Avenue, lounges, and Registration
    - The Howard Street Tent and Taylor Street Cafe tented venues
    - Oracle Square (Union Square)
    - Yerba Buena Gardens
    - Masonic Auditorium
    - Sessions and demos at 8 hotel venues
    That's a whole lot of networking going on. And here's the kicker: the team has only 4 days to get it all up and running across these many venues, and exactly 12 hours to take it all down once the show ends. The Global IT team puts in the equivalent of 152 24-hour days for set-up, 227 24-hour days of support during the conference, and then tears it all down in about 20 24-hour days. And in case you were wondering, the planning for next year's Oracle OpenWorld starts ... next week. No rest for the weary. Now THAT's networking! So hats off to the Global IT team -- the job ain't easy, but somebody's got to do it, and they do it remarkably well.

    Read the article

  • iPhone and Mac Twitter App Not Parsing HTML in Feeds

    - by otakrosak
    I've come across a problem where the iPhone and Mac Twitter apps are not rendering the HTML entities in my tweets properly. In any browser, the HTML is rendered fine. It only happens with the apostrophe character (') and only in the iPhone and Mac Twitter apps. I'm using the dlvr.it service to push my Drupal blog posts to my Twitter feed. My initial guess was that it was the RSS generated by Drupal, but if that were the case, the feed displayed in the browser would be affected too. Any ideas, anyone? Any help would be very much appreciated. P.S: I apologize beforehand if this question already exists; I did search for it, but nothing came up for my queries.

    Read the article

  • Linux/Solaris: replace hostnames in files according to a hostname rule

    - by yael
    With the following Perl command (part of a ksh script) I can replace old hostnames with new hostnames on Linux or Solaris:

      previos_machine_name=linux1a
      new_machine_name=Red_Hat_linux1a
      export previos_machine_name
      export new_machine_name

      perl -i -pe 'next if /^ *#/; s/(\b|[[:^alnum:]])$ENV{previos_machine_name}(\b|[[:^alnum:]])/$1$ENV{new_machine_name}$2/g' file

    Explanation: according to the Perl command, we do not replace hostnames in the following case:

      RULE: [NUMBERS]||[letter]HOSTNAME[NUMBERS]||[letter]

    My question: after I have used the Perl command to replace all old hostnames with new hostnames based on the RULE above, how do I verify that the old hostnames no longer exist in the file? For example, with

      previos_machine_name=linux1a
      new_machine_name=Red_Hat_linux1a

    and a file containing

      AAARed_Hat_linux1a        should be ignored by the verification
      @Red_Hat_linux1a$         should be matched by the verification
      P=Red_Hat_linux1a         should be matched by the verification
      XXXRed_Hat_linux1aZZZ     should be ignored by the verification
      ...
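
    A hedged sketch of one way to do that verification (assuming a grep with -E support; the pattern approximates the boundary rule of the Perl substitution, and comment lines are skipped the same way the Perl command skips them):

      # Any output means an old hostname survived the replacement on a non-comment line.
      grep -nE "(^|[^[:alnum:]])${previos_machine_name}([^[:alnum:]]|\$)" file | grep -vE '^[0-9]+: *#'

    In the ksh script this can drive the check directly: if the pipeline prints nothing, the file is clean.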

    Read the article

  • Move entire OS from NTFS drive to bigger ext4 drive.

    - by pangel
    According to SMART data, the hard drive I currently use is about to fail, so I bought a new, bigger drive to copy the system to a safer place. The old drive is 160GB; Ubuntu was installed with Wubi, and the partition is NTFS. There are a few other partitions around (recovery partition, swap...) that I don't care about. The new drive is 320GB, and I would like the new system to run on ext4, not on NTFS. I looked at solutions that use dd or Clonezilla, but it seems that moving to a different filesystem prevents me from using them. I considered installing a brand new Ubuntu on the new hard drive and then copying /home from the old drive to the new one, but I heard that there would be file permission problems, and I would also have to reinstall all my software. One last thing: the NTFS drive has dead sectors. I don't know how this can influence the copy process, but I mention it just in case. Edit: I do not care about the Windows partition. I just want Ubuntu to make the transition.
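
    A hedged sketch of the fresh-install-plus-copy route (all paths are placeholders, and the root.disk location is the usual Wubi default rather than something confirmed from this machine): install a clean Ubuntu on the new ext4 drive, then copy the old home directory across with permissions preserved. rsync reports files it cannot read from the dead sectors and keeps going instead of aborting the whole copy.

      # Boot the new install, mount the old NTFS partition (placeholder: /media/oldwin),
      # then loop-mount the Wubi root.disk that holds the old Ubuntu system:
      sudo mkdir -p /mnt/old
      sudo mount -o loop,ro /media/oldwin/ubuntu/disks/root.disk /mnt/old

      # Copy the old home directory, preserving ownership, permissions, timestamps, ACLs/xattrs:
      sudo rsync -aAXv /mnt/old/home/ /home/

    The permission problems usually come down to numeric user IDs: if the first user created on the fresh install has the same UID as before (typically 1000), the copied files line up automatically; otherwise a chown -R over the copied home directory fixes it. Reinstalling the software is still unavoidable with this approach.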

    Read the article

  • Process ksoftirqd consumes a constant 15% CPU load [closed]

    - by markus
    Possible Duplicate: Anyone else experiencing high rates of Linux server crashes during a leap second day?
    The process ksoftirqd/0 uses a constant 15% CPU on our Debian Squeeze server:

      4 root 20 0 0 0 0 R 15.0 0.0 850:59.17 ksoftirqd/0

    I have already read that this can have various causes, like a full hard disk or high network traffic. In our case we have fairly low network traffic and enough free space on the hard disk. How can I analyse what causes ksoftirqd/0 to use 15% CPU permanently?
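
    A hedged sketch of where to start looking (standard tools; perf may need the linux-tools package on Squeeze): ksoftirqd only burns CPU when some softirq type keeps firing, so the first step is to find out which one and then trace it back to the device or subsystem raising it.

      # Which softirq counters are climbing? Run twice a few seconds apart and compare.
      cat /proc/softirqs

      # Which hardware interrupts are most active, and on which CPU?
      cat /proc/interrupts

      # If available, sample what the kernel is doing inside ksoftirqd/0:
      perf top

    Given the duplicate link above: if this started around 1 July 2012, the leap-second bug is the likely culprit, and the widely reported workaround at the time was to reset the system clock (for example date -s "$(date)") or reboot.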

    Read the article

  • Apply a rewrite rule to everything except the files (recursively) in a subdirectory?

    - by user784637
    I have an .htaccess file in the root of the website that looks like this:

      RewriteRule ^some-blog-post-title/ http://website/read/flowers/a-new-title-for-this-post/ [R=301,L]
      RewriteRule ^some-blog-post-title2/ http://website/read/flowers/a-new-title-for-this-post2/ [R=301,L]

      <IfModule mod_rewrite.c>
      RewriteEngine On

      ## Redirects for all pages except for files in wp-content to website/read
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !/wp-content
      RewriteRule ^(.*)$ http://website/read/$1 [L,QSA]
      #RewriteRule ^http://website/read [R=301,L]

      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>

    My intent is: (1) redirect people to the new blog post location if they request one of those special blog posts; (2) otherwise redirect them to http://website.com/read; (3) never redirect anything under http://website.com/wp-content/. So far conditions 1 and 3 are being met. How can I meet condition 2?

    Read the article

  • What's the best way to copy deduplicated files onto a new Server 2012 drive?

    - by Screndib
    We have a deduplicated volume on a Windows Server 2012 machine that is approaching its limits: it is a 1.3TB drive holding roughly 10TB of logical data thanks to deduplication. We want to copy all of this data onto a larger 4TB drive. What is the best way to perform this copy so that we only move the ~1.3TB of deduplicated data, instead of rehydrating the entire 10TB and re-deduplicating it on the other end? Edit: I attempted a standard Explorer file copy and a Copy-Item, but neither appeared to be dedup-aware. I didn't run either to completion, however, so I can't say that for sure.

    Read the article
