Search Results

Search found 1430 results on 58 pages for 'risk assesment'.

Page 40/58 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • Oracle Linux Partner Pavilion Spotlight - Part II

    - by Ted Davis
    As we draw closer to the first day of Oracle OpenWorld, starting in less than a week, we continue to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033). We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. In today's post we highlight three additional Oracle Linux / Oracle VM partners from the pavilion. Micro Focus delivers mainframe solutions software and software delivery tools with its Borland products. These tools are grouped under the following solutions: Analysis and testing tools for JDeveloper: Micro Focus Enterprise Analyzer is key to the success of application overhaul and modernization strategies by ensuring that they are based on a solid knowledge foundation. It reveals the reality of enterprise application portfolios and the detailed constructs of business applications. COBOL for Oracle Database, Oracle Linux, and Tuxedo: Micro Focus Visual COBOL delivers the next generation of COBOL development and deployment. It brings the productivity of the Eclipse IDE to COBOL, and provides the ability to deploy key business-critical COBOL applications to Oracle Linux both natively and under a JVM. Migration and modernization tooling for mainframes: enterprise application knowledge, development, test and workload re-hosting tools significantly improve the efficiency of business application delivery, enabling CIOs and IT leaders to modernize application portfolios and target platforms such as Oracle Linux. When it comes to Oracle Linux database environments, supporting high transaction rates with minimal response times is no longer just a goal. It's a strategic imperative. The "data deluge" is impacting the ability of databases and other strategic applications to access data and provide real-time analytics and reporting. As such, customer demand for accelerated application performance is increasing. Visit LSI at the Oracle Linux Pavilion, #733, to find out how LSI Nytro Application Acceleration products are designed from the ground up for database acceleration. Our intelligent solid-state storage solutions help to eliminate I/O bottlenecks, increase throughput and enable Oracle customers to achieve the highest levels of DB performance. Accelerate your Exadata success with Teleran. Teleran's software solutions for Oracle Exadata and Oracle Database reduce the cost, time and effort of migrating and consolidating applications on Exadata. In addition, Teleran delivers visibility and control solutions for BI/data warehouse performance and user management that ensure service levels and cost efficiency. Teleran will demonstrate these solutions at the Oracle OpenWorld Linux Pavilion: Consolidation Accelerator - reduces the cost, time and risk of migrating and consolidating applications on Exadata. Application Readiness - identifies legacy application performance enhancements needed to take advantage of Exadata performance features. Workload Accelerator - identifies and clusters workloads for faster performance on Exadata. Application Visibility and Control - improves performance, user productivity, and alignment to business objectives while reducing support and resource costs. Thanks for reading today's Partner Spotlight. Three more partners will be highlighted tomorrow. If you missed our first Partner Spotlight, check it out here.

    Read the article

  • Is return-type-(only)-polymorphism in Haskell a good thing?

    - by dainichi
    One thing that I've never quite come to terms with in Haskell is how you can have polymorphic constants and functions whose return type cannot be determined by their input type, like class Foo a where foo :: Int -> a. Some of the reasons that I do not like this: Referential transparency: "In Haskell, given the same input, a function will always return the same output", but is that really true? read "3" returns 3 when used in an Int context, but throws an error when used in, say, an (Int, Int) context. Yes, you can argue that read is also taking a type parameter, but the implicitness of the type parameter makes it lose some of its beauty in my opinion. Monomorphism restriction: One of the most annoying things about Haskell. Correct me if I'm wrong, but the whole reason for the MR is that a computation that looks shared might not be, because the type parameter is implicit. Type defaulting: Again one of the most annoying things about Haskell. It happens e.g. if you pass the result of functions polymorphic in their output to functions polymorphic in their input. Again, correct me if I'm wrong, but this would not be necessary without functions whose return type cannot be determined by their input type (and polymorphic constants). So my question is (running the risk of being stamped as a "discussion question"): Would it be possible to create a Haskell-like language where the type checker disallows these kinds of definitions? If so, what would be the benefits/disadvantages of that restriction? I can see some immediate problems: If, say, 2 only had the type Integer, 2/3 wouldn't type check anymore with the current definition of /. But in this case, I think type classes with functional dependencies could come to the rescue (yes, I know that this is an extension). Furthermore, I think it is a lot more intuitive to have functions that can take different input types than to have functions that are restricted in their input types while we just pass polymorphic values to them. The typing of values like [] and Nothing seems to me like a tougher nut to crack. I haven't thought of a good way to handle them. I doubt I am the first person to have had thoughts like these. Does anybody have links to good discussions about this Haskell design decision and the pros/cons of it?

    Read the article

  • Multiplayer tile based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile based, you can only move in 4 directions, and every move moves the sprite 32px (over time, of course). Now, if I simply sent this move action to the server, which would broadcast it to all players, then while the walk key is held down to keep walking I have to take the next command, send it to the server and to all clients, in time, or the movement won't be smooth anymore. I saw this in other games, and it can get ugly pretty quickly, even without lag. So I'm wondering if this is even a viable option. This seems like a very good method for single player though, since it's easy and straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) to add a path to a queue that's walked along. The other thing that came to my mind was sending the information that someone started moving in some direction, and again once he stopped or changed direction, together with the position, so that the sprite will appear at the correct position, or rather so that the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work out I'd need some kind of queue, though, where incoming direction changes and stops are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but it sounds a little overcomplicated, although it might be the only way to do this without risk of stuttering. If a stop or direction change is received on the client side it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late there'll be stuttering as well, of course... I'm having a hard time deciding on a method, and I couldn't really find any examples for this yet. My main problem is keeping the tile movement smooth, which is why other topics regarding synchronization of pixel-based movement aren't helping too much. What is the "standard" way to do this?
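
    A minimal sketch of that queue idea, in Java with invented class and field names (nothing here is taken from the question itself): incoming direction changes are buffered and only applied once the sprite has finished its current 32px step to the target tile.

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Sketch only: buffer direction changes received from the server and apply
        // the next one when the current tile move completes. Names are hypothetical.
        class RemotePlayer {
            enum Direction { UP, DOWN, LEFT, RIGHT, STOP }

            private static final int TILE_SIZE = 32;
            private final Queue<Direction> pending = new ArrayDeque<>();
            private int x, y;               // current pixel position
            private int targetX, targetY;   // pixel position of the tile being walked to
            private Direction current = Direction.STOP;

            // Called by the network layer whenever a move/stop command arrives.
            void enqueue(Direction d) {
                pending.add(d);
            }

            // Called once per frame; speed is the number of pixels to move this frame.
            void update(int speed) {
                if (x == targetX && y == targetY) {
                    // Finished the current tile move: start the next queued command.
                    Direction next = pending.poll();
                    current = (next != null) ? next : Direction.STOP;
                    switch (current) {
                        case UP:    targetY -= TILE_SIZE; break;
                        case DOWN:  targetY += TILE_SIZE; break;
                        case LEFT:  targetX -= TILE_SIZE; break;
                        case RIGHT: targetX += TILE_SIZE; break;
                        default:    break; // STOP: stay on this tile
                    }
                }
                // Move toward the target tile without overshooting it.
                x += Integer.signum(targetX - x) * Math.min(speed, Math.abs(targetX - x));
                y += Integer.signum(targetY - y) * Math.min(speed, Math.abs(targetY - y));
            }
        }

    The "position fix" part of the second approach would slot in here as well: when a stop message arrives with authoritative coordinates, snap x/y (or the target tile) to the server-reported position instead of trusting the locally predicted one.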

    Read the article

  • Code Design question, circular reference across classes?

    - by dsollen
    I have no code here, as this is more of a design question (I assume this is still the best place to ask it). I have a very simple server in Java which stores a mapping between certain values and UUIDs which are to be used by many systems across multiple platforms. It accepts a connection from a client and creates a clientSocket which stores the socket and all the other relevant data unique to that connection. Each clientSocket will run in its own thread and will block on the socket waiting for a read. I expect very little strain on this system; it will rarely get called, but when it does get a call it will need to respond quickly, and due to the risk of a peak time with multiple calls coming in at once, threaded is still better. Each thread has a reference to a Mapper class which stores the mapping of UUIDs it's reporting to others (with proper synchronization, of course). This all works until I have to add a new UUID to the list. When this happens I want to report to all clients that care about that particular UUID that a new one was added. I can't multicast (a limitation of the system I'm running on), so I'm having each socket send the message to the client through the established socket. However, since each thread only knows about the socket it's waiting on, I didn't have a clear method of looking up every thread/socket that cares about the data to inform them of the new UUID. Polling is out, mostly because it seems a little too convoluted to try to maintain a list of newly added UUIDs. My solution as of now is to have the 'parent' class which creates the Mapper class and spawns all the threads pass itself as an argument to the Mapper. Then when the Mapper creates a new UUID it can make a call to the parent class telling it to send out updates to all the other sockets that care about the change. I'm concerned that this may be a bad design due to the use of a circular reference; the parent has a reference to the Mapper (to pass it to new clientSocket threads) and the Mapper points back to the parent. It doesn't really feel like a bad design to me, but I wanted to check since circular references are supposed to be bad. Note: I realize this means that the thread associated with whatever socket originally received the request that spawned the creation of a UUID is going to pay the 'cost' of outputting to all the other clients that care about the new UUID. I don't care about this; as I said, I expect the clients to receive only intermittent messages. It's unlikely for one socket to receive multiple messages at one time, and there won't be that many sockets, so it shouldn't take too long to send messages to each of them. Perhaps later I'll fix the fact that I'm saddling a higher workload on whatever unfortunate thread gets the first request, but for now I think it's fine.
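
    To make the described wiring concrete, here is a bare-bones Java sketch of that parent/Mapper arrangement (all class and method names are invented for illustration, not taken from the actual code):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.UUID;
        import java.util.concurrent.CopyOnWriteArrayList;

        // Sketch of the design in the question: the parent creates the Mapper and the
        // client threads, and the Mapper keeps a back-reference to the parent so it
        // can ask it to broadcast when a new UUID appears.
        class Server {
            private final List<ClientHandler> clients = new CopyOnWriteArrayList<>();
            private final Mapper mapper = new Mapper(this); // back-reference handed out here

            void register(ClientHandler c) { clients.add(c); }

            Mapper getMapper() { return mapper; }

            // Called by the Mapper when a new mapping is created.
            void broadcastNewUuid(String key, UUID id) {
                for (ClientHandler c : clients) {
                    if (c.caresAbout(key)) {
                        c.send("NEW " + key + " " + id);
                    }
                }
            }
        }

        class Mapper {
            private final Server parent;                      // the circular reference in question
            private final Map<String, UUID> mapping = new HashMap<>();

            Mapper(Server parent) { this.parent = parent; }

            synchronized UUID getOrCreate(String key) {
                UUID id = mapping.get(key);
                if (id == null) {
                    id = UUID.randomUUID();
                    mapping.put(key, id);
                    parent.broadcastNewUuid(key, id);         // notify everyone who cares
                }
                return id;
            }
        }

        class ClientHandler /* runs in its own thread, owns one socket */ {
            boolean caresAbout(String key) { return true; }   // placeholder for real filtering
            void send(String msg) { /* write to this client's socket */ }
        }

    If the cycle ever becomes a nuisance (say, for testing the Mapper in isolation), the usual loosening is to have the Mapper depend on a small listener interface that the parent implements, rather than on the parent class itself; the object graph stays the same, but the Mapper no longer knows anything about the rest of the server.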

    Read the article

  • WCF Routing Service Filter Generator

    - by Michael Stephenson
    Recently I've been working with the WCF routing service, and in our case we were simply routing based on the SOAP Action. This is a pretty good approach for a standard redirection of the message, when all messages matching a SOAP Action will go to the same endpoint. Using the SOAP Action also lets you be specific about which methods you expose via the router. One of the things which was a pain was the number of routing rules I needed to create, because we were routing for a lot of different methods. I could have explored the option of using a regular expression to match the message to its routing, but I wanted to be very specific about what's routed and not risk exposing methods I shouldn't via the router. I decided to put together a little spreadsheet so that I can generate part of the configuration I would need to put in the configuration file, rather than have to type this by hand. To see how this works, download the spreadsheet from the following URL: https://s3.amazonaws.com/CSCBlogSamples/WCF+Routing+Generator.xlsx In the spreadsheet you will see that the cells in green are the ones which you need to amend. In the picture below you can see that you specify a prefix and suffix for the filter name, the core namespace from the web service you're generating routing rules for, and the WCF endpoint name which you want to route to. In column A you will see the green cells where you add the list of method names which you want to include routing rules for. The spreadsheet will work out what the full SOAP Action would be and the name you will use for that filter in your WCF routing filters. In column D the spreadsheet will have generated the XML snippet which you can add to the routing filters section in your configuration file. In column E the spreadsheet will have created the XML snippet which you can add to the routing table to send messages matching each filter to the appropriate WCF client endpoint, to forward the message to the required destination. Hopefully you can see that with this spreadsheet it would be very easy to produce accurate XML for the WCF routing configuration if you had a large number of routing rules. If you have additional methods in other services you can simply copy the worksheet and add multiple copies to the Excel workbook. One worksheet per service would work well.

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it as a good load of great advice, all this information is confusing me. **Problem** I already had, on one of my sites, "prettified" URLs. I had taken out the query strings and rewritten the URLs, and the links were short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to just see a clue of the page content in the URL. **Solution** With this in mind, I am trying it out on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/). **Problems after solution** The problem, as I see it, is that the URL of those blog articles is now very descriptive for sure, but it is also impossible to remember. So this brings me back to the same issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles. **Open questions** I still have some open questions, and I don't seem to be able to find answers, because the answers I find tend to contradict one another. 1) How many characters long should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure. 2) Is this really good for SEO? One of those blog articles I have redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think: URL slug and SEO - structure (but see this other question with the opposite opinion). 3) To ask the question with a specific example, would this URL risk being penalized? Is it acceptable? Is it too long? StackOverflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just wanted to facilitate my users without running afoul of Google's algorithms.
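
    For illustration only (this is not the poster's actual script, and the names are invented), generating a slug from a title and then finding the closest existing slug for a mistyped URL could look roughly like this in Java, using a plain edit-distance comparison:

        import java.text.Normalizer;
        import java.util.List;
        import java.util.Locale;

        // Sketch: build a slug from an article title, and pick the closest known slug
        // for whatever the visitor actually typed. Purely illustrative names.
        class Slugs {
            // "Terribly Boring & Long Title!" -> "terribly-boring-long-title"
            static String slugify(String title) {
                return Normalizer.normalize(title, Normalizer.Form.NFD)
                        .replaceAll("\\p{M}", "")          // strip accents
                        .toLowerCase(Locale.ROOT)
                        .replaceAll("[^a-z0-9]+", "-")     // collapse everything else to dashes
                        .replaceAll("(^-+|-+$)", "");      // trim leading/trailing dashes
            }

            // Return the known slug with the smallest edit distance to what was typed.
            static String closestMatch(String typed, List<String> knownSlugs) {
                String best = null;
                int bestDistance = Integer.MAX_VALUE;
                for (String slug : knownSlugs) {
                    int d = editDistance(typed, slug);
                    if (d < bestDistance) {
                        bestDistance = d;
                        best = slug;
                    }
                }
                return best; // caller would 301-redirect to this slug
            }

            // Standard Levenshtein distance.
            static int editDistance(String a, String b) {
                int[][] dp = new int[a.length() + 1][b.length() + 1];
                for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
                for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
                for (int i = 1; i <= a.length(); i++) {
                    for (int j = 1; j <= b.length(); j++) {
                        int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                        dp[i][j] = Math.min(Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1),
                                            dp[i - 1][j - 1] + cost);
                    }
                }
                return dp[a.length()][b.length()];
            }
        }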

    Read the article

  • Response: Agile's Second Chasm

    - by Malcolm Anderson
    William Pietri over at Agile Focus has written an interesting article entitled "Agile's Second Chasm (and how we fell in)", in which he talks about how agile development has fallen into a common trap where large companies are now spending a lot of money hiring agile (Scrum) consultants just so that they can say they are agile, all the while avoiding any change that is required by Scrum. It echoes the question that I've been asking for a while: "Can a Fortune 500 company actually do agile development?" I'm starting to think that the answer is "usually not". William asks three questions at the end of his article that I will answer here. 1) Have I seen agile development brought in and then preemptively customized (read: made into ScrummerFall)? Yes. Scrum is hard and disruptive. It's a spotlight on company dysfunction. In a low-trust environment like most Fortune 500 companies, Scrum will be subverted by anyone who has ever seen "transparency" translate into someone being laid off. 2) If I had to do it all over again, would I change anything? No, this is a natural progression, but the agile principles are powerful enough that the companies that don't adopt them will no longer be competitive and will start to fail. 3) Is this situation solvable? I think it is. I think that one of the issues is that you often see companies implementing Scrum but avoiding the agile engineering practices. I believe that you cannot do one without the other. Scrum keeps the ship sailing in smooth, deep waters. The agile engineering practices keep the engine running smoothly and cleanly. If you implement agile engineering practices without Scrum, you run the risk of ending up with a great running piece of software that is useful to no one. On the other hand, implement cargo-cult Scrum without the agile engineering practices and you end up (especially in a Fortune 500 company) being steered in the right direction, but with your development practices coming to a dead halt because you have code that cannot keep up with the changes in requirements. If you are trying to do Scrum, make sure that you hire some agile engineering coaches, or else you may find your development engines grinding to a dead halt in the middle of the open ocean.

    Read the article

  • RAID superblock missing on single partition. Recovery needed!

    - by user171639
    OK, so I have a 2 TB RAID 1 setup that has three partitions: sdc1 (Linux), sdc2 (swap), and sdc3 (LVM for data). However, the LVM will no longer mount. So I thought that I would take the first drive, mount it in Linux (I've done this before), and reset the spare drive to copy the data. Normally I can mount a single drive for data recovery using:

        sudo su
        apt-get install mdadm lvm2
        mdadm --assemble --scan
        modprobe dm-mod
        vgscan
        vgchange -ay c
        mount -o ro /dev/c/c /mnt

    Unfortunately, vgscan does not recognize the data partition. It appears as though the superblock on the first drive's data partition was erased while syncing with the second. So now I cannot mount that partition, and the second drive is stuck in spare mode. Any ideas? Or a way to force-mount the data partition just to copy the data?

        knoppix@Microknoppix:~$ sudo su
        root@Microknoppix:/home/knoppix# apt-get install mdadm lvm2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        lvm2 is already the newest version.
        mdadm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 551 not upgraded.
        root@Microknoppix:/home/knoppix# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 1 drive (out of 2).
        mdadm: /dev/md/0 has been started with 1 drive (out of 2).
        root@Microknoppix:/home/knoppix# modprobe dm-mod
        root@Microknoppix:/home/knoppix# vgscan
        Reading all physical volumes. This may take a while...
        No volume groups found
        root@Microknoppix:/home/knoppix# cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdc1[2]
              4193268 blocks super 1.2 [2/1] [U_]
        md1 : active raid1 sdc2[2]
              524276 blocks super 1.2 [2/1] [U_]
        unused devices: <none>
        root@Microknoppix:/home/knoppix# mdadm -v --assemble --auto=yes /dev/md2 /dev/sdc3
        mdadm: looking for devices for /dev/md2
        mdadm: no recogniseable superblock on /dev/sdc3
        mdadm: /dev/sdc3 has no superblock - assembly aborted
        root@Microknoppix:/home/knoppix# dumpe2fs /dev/md0 | grep -i superblock
        dumpe2fs 1.42.4 (12-Jun-2012)
        Primary superblock at 0, Group descriptors at 1-1
        Backup superblock at 32768, Group descriptors at 32769-32769
        Backup superblock at 98304, Group descriptors at 98305-98305
        Backup superblock at 163840, Group descriptors at 163841-163841
        Backup superblock at 229376, Group descriptors at 229377-229377
        Backup superblock at 294912, Group descriptors at 294913-294913
        Backup superblock at 819200, Group descriptors at 819201-819201
        Backup superblock at 884736, Group descriptors at 884737-884737
        root@Microknoppix:/home/knoppix#

    Notes: I can read the superblock from the spare drive. I was going to try to restore the superblock from one of the backups, but I don't know how or whether that would work. I also heard that creating a new array (mdadm --create) using the same parameters will not delete the data on the drive, but I didn't want to risk it. Recommendations?

    Read the article

  • Rolling With the Punches

    - by D'Arcy Lussier
    So I've been tweeting "Rolling with the punches" for the last little while, and I've had some people ask me what that meant. Whether you're running a conference (like I am this week), or a project, or a birthday party for a 2-year-old, you need to be ready to handle those things that are unexpected. Risk mitigation can only go so far, and it's at those times that you need to become resourceful. So let me tell you what the last few days have been like. Today is the first day of Prairie Dev Con Winnipeg, a conference that I run. On Friday I was informed that my keynote speaker had lost his voice, one of my speakers had a family emergency and had to back out, and I got a warning from another that he was travelling over the weekend and that if there was a storm or something he might not be able to get back by Monday for his talk. A storm didn't happen, but their car did break down and he was delayed. Finally, Saturday night I took my printing order to Staples. It was at 5 and they closed at 6, and I had a bunch of surveys to be printed and cut. The girl working said that she'd have it ready by the next day (Sunday). Her intent was to come in the next morning and finish the job. Unfortunately, she had to be hospitalized that night and never made it into work... and never informed anyone of the remaining work. They found out at 3pm when I came to pick it up, and there was no way they'd be able to cut everything in time. So how did we roll with these punches? - Miguel, my keynote speaker, was a trooper and was able to do the keynote, but asked that his session get moved from Monday to Tuesday. This is why I wait until the last day before printing out schedules; they can change right up to the event and even later. - I was able to move some sessions around to accommodate my stranded speaker and fill the empty slot from the speaker that couldn't make it. - Staples was able to get me half the cut surveys, so I took those and my wife will pick up the rest today. I altered how we'd collect session surveys, and actually I think it'll work better. So all of this is to say: plan, but also plan for what you can't plan for - there will be things that happen that blindside you, that you're not sure how to handle or solve. Stop, take a deep breath, and don't feel that you need to limit yourself to the boundaries that you initially set for yourself. Roll with the punch and learn from it so that you can avoid the blow next time. Now, back to the conference! D

    Read the article

  • Recommended learning path?

    - by stairmast0r
    First, my current standing: I know C++ at an... advanced beginner level? I've gone through a book, I know the syntax well enough, I know a fair number of standard library functions, and I've programmed some simple console stuff with it. I'd probably be able to program more with it if I knew how to structure a program, but I just can't seem to wrap my head around the whole concept of structuring something remotely complex. I've messed around with Java for a day or two, and the syntax was extremely easy to get the hang of, except that I didn't really know any of the library functions. I'm plenty willing to learn, and to work hard to do so, but I don't really know where to go from here. Now, at the risk of sounding cliche, what I'd like to become is someone like the great three of id: Carmack, Romero, and Abrash. To be considered a genius. I believe anything can be learned, and nothing mentally limits anyone except lack of desire to learn. But I don't know how to learn this. They learned by doing, and by making do with what resources they had. On the other hand, I have access to almost any books I want, access to the internet, and access to a more than capable computer and software. Should I learn more languages? Assembly? LISP? BASIC? Haskell? Should I dive straight into advanced topics like OpenGL? Or should I wait until I feel I've come closer to mastering the simpler things, like console programs, first? Should I follow tutorials? Should I follow books? Should I just dive into writing something and follow a reference manual as I go? What order should I do all this in? How should I do it? I want to completely master this; to be considered a genius. The most perfect career I can imagine is to start the next id. I have the drive to do it, I just don't know where to begin...

    Read the article

  • Where can I find comprehensive documentation on the various aspects of Linux?

    - by Fsando
    Whenever a problem pops up in my use of Linux (I've been a full-time user for 6 years) I start by googling. The simplest (or most common) issues will usually be cleared up right away. If it's not in one of those two categories, the advice I find tends to be wrong, misguided, or obsolete, and if you ask a question here on "ask" you risk being marked as a "duplicate" of some superficially similar question. Some issues have haunted me for years until I suddenly hit on the actual developer's documentation (or equivalent), which in a few lines explains how to solve my issue the correct and consistent way. Whenever that happens I'm always kicking myself: why didn't I just go here to begin with? And the obvious answer is: "I had no idea this was what I was looking for." And for the issues where this hasn't happened yet, I'm banging my head: "This ought to be either straightforward, or someone should tell me it's not doable." So my question is: are there projects out there trying to collect or list this documentation in a searchable/browsable way? I know there are many very good "if you want this, do that" tutorials on Ubuntu, but I'm looking for actual documentation. Documentation that either is or could be collected in one place (at least conceptually), so that a search for information could start in one place. I'm fully aware this is a broad question, but if you approach it as: Does GNOME have a comprehensive documentation project - where do I find it? Does Ubuntu have a comprehensive documentation project - where do I find it? For example: how exactly does the MIME-type association work in Ubuntu and in Xubuntu? How exactly are menus created (in Ubuntu: quicklists; in Xubuntu/GNOME: the main menu)? How exactly does the rendering process work for Compiz/X? (I'm having this issue where windows randomly stop updating until somehow forced to resume (I guess). So, for instance, where do I look for logs that may indicate the problem? How may I change RandR or other settings that may influence this issue?) So my point is to organize exact documentation, or preferably to find projects that do this already. Thanks! If answers to this question get me started, I'm hoping to collect such a list.

    Read the article

  • What is a Valid Trust Anchor in Windows 7 relating to Wifi?

    - by Aaron
    The error below just started happening at work with a personal laptop running Windows 7 Ultimate. I'm unable to use installed, non-expired certificates to connect to a private wireless network. No recent changes were made by IT that would explain the issue. It worked fine several weeks ago, and it happens on two laptops I own. The details and some screenshots are here: http://www.wiredprairie.us/blog/index.php/archives/906 The error we don't understand is this: "The credentials provided by the server could not be validated. We recommend that you terminate the connection and contact your administrator with the information provided in the details. You may still connect but doing so exposes you to the security risk by a possible rogue server. The server XYZ presented a valid certificate issued by Company Name Certificate Authority but Company Name Certificate Authority is not configured as a valid trust anchor for this profile." We don't know how to resolve the issue without ignoring the error (nor what's changed that could explain this new error). Update: The new information is that we have our own root CA, and that the certificates were not updated recently, nor have any expired.

    Read the article

  • Windows web server and SQL Server on same dedicated server

    - by asinc
    I'm currently trying to decide on the best approach to hosting a few moderate-traffic websites for production e-commerce and online applications. We'd like to move to a dedicated server and are looking at this as the most likely machine: quad-core Intel Core 2 Quad Q9550 processor, 2.83 GHz x 4, 4 GB Kingston RAM. This would run Windows Web Server 2008 R2 x64 and potentially also SQL Server 2008 Web edition and SmarterMail server. Given that we already pay for a high-end VPS for development, testing, and shared version control, we'd like to avoid going with two servers for production. We'd like to avoid using shared SQL Server hosting, and we have thought of using the development server as the database server as an option too - but that's potentially a security risk due to its use for development by internal and contract users. The questions are: - Do you feel there would be performance degradation by running this on the same machine? - Are there significant issues to be concerned about if we do this? We understand that best practice would be to run separate database and app servers, but the volume of traffic is currently not that high and adding another server just for the database is currently too costly. - What are others doing out there? Alternatively, would you recommend instead going with two separate VPS servers with 2 GB RAM each on Hyper-V, which would be about the same cost as the single dedicated server above? Thanks!

    Read the article

  • Untrusted (self-signed) certificate on Android browser

    - by Basiclife
    Hi all, apologies for the brevity of this question, but due to an unfortunate series of events I've managed to brick my PC, so I am posting from my phone... We've just set up Windows Small Business Server 2008 at work, which has an external web portal accessible via HTTPS. We haven't yet bought/installed any certificates. The portal provides access to email, SharePoint, remote desktop, etc. (I'm aware some of these are never going to work on the phone.) From Firefox and other desktop browsers, this displays an "untrusted cert" warning which I can choose to ignore. When browsing from my mobile I get a popup notification which says "A secure connection could not be established". When I OK this (my only option) I see the standard Android-generated "unable to load page - has it moved?" page. Does anyone know of a way to either accept the certificate temporarily or allow untrusted certificates generally? I'm aware that the latter option is non-ideal in the mid to long term, but at the moment I need to access the portal and am willing to either toggle settings as/when required or forgo using the mobile for banking, etc., to mitigate my risk. Thanks in advance for any help you can provide, and apologies again for brevity. In case it helps, I'm on the G1 running Android 1.6, using the default browser.

    Read the article

  • Install a web certificate on an Android device

    - by martani_net
    To gain access to WiFi at university I have to log in with my user/pass credentials. The certificate of their website (the local home page that asks for the credentials) is not recognized as a trusted certificate, so we install it separately on our computers. The problem is that I don't often take my laptop with me to university, so I usually want to connect using my HTC Magic, but I have no clue how to install the certificate separately on Android; it is always rejected. [Edit 2]: this is what is stated on their website (roughly, from the French): Need for installation of official CyberTrust certificates validated by the CRU (http://www.cru.fr/wiki/scs/). The certificates contain certified information used to generate encryption keys for exchanging "sensitive" data, such as a user's password. When connecting to CanalIP-UPMC, for example, the user must validate the identity of the server by accepting the certificate that appears on screen in a popup window. In reality, the user has no way of genuinely validating the certificate, because a simple visual check of it is impossible. Therefore, the certificates of the certification authority (Cybertrust-Educationnal-ca.ca and Cybertrust-global-root-ca.ca) must be installed in the browser beforehand so that the validity of the server certificate can be checked automatically. Before connecting to the CanalIP-UPMC network, you must register the Cybertrust-Educationnal-ca.ca certification authority in your browser: download Cybertrust-Educationnal-ca.ca and, depending on your browser (links are given for Internet Explorer, Firefox and Safari), install it. If this procedure is not followed, the user runs a real risk: that of having their UPMC LDAP directory password stolen. A malicious server could very easily mount a "man-in-the-middle" attack by posing as the legitimate UPMC server. The theft of a password allows the attacker to steal an identity for transactions over the Internet, which can engage the responsibility of the trapped user... This is their website: http://www.canalip.upmc.fr/doc/Default.htm (in French; Google-translate it :)) Does anyone know how to install a web certificate on Android?

    Read the article

  • Launching firefox on remote server causes local firefox to start instead

    - by terdon
    Right, this is strange. I am connecting from my laptop (LMDE) to a remote host (SUSE Linux Enterprise) using ssh -X. I want to launch a Firefox instance running on the remote server so I can have access to web pages on a private network.

        User@RemoteMachine $ which -a firefox
        /usr/bin/firefox
        User@RemoteMachine $ /usr/bin/firefox --version
        Mozilla Firefox 2.0.0.2, Copyright (c) 1998 - 2007 mozilla.org
        User@LocalMachine $ which -a firefox
        /usr/bin/firefox
        User@LocalMachine $ /usr/bin/firefox --version
        Mozilla Firefox 14.0.1

    Now, if Firefox is not running on the local machine, everything goes as expected and executing firefox on the remote machine causes a Firefox (v2.0) window running on the remote machine to show up. However, if Firefox is running on the local machine, a second window of Firefox 14.0.1 running on the local machine appears instead. I have checked top on both machines. In the second case, a firefox process briefly appears on the remote machine and then disappears when the local version of Firefox is launched. My questions are the following: What gives? How/why can Firefox connect to its existing instance on the local machine? The remote machine appears to have access to the local machine. It, in fact, appears to have the right to execute programs on my local machine. Am I missing something, or is this just weird? Is this not a security risk?

    Read the article

  • How do you host multiple public facing websites on a VPS?

    - by pedroarvy
    We host about 30 websites using typical shared hosting plans with ASP.NET and SQL Server 2000/2005/2008. I am now wondering about hosting all of these websites on our own virtual private server. This is clearly cheaper, but comes with a lot of questions I need answers to: Is the risk of having to keep this VPS server up and running worth it? Until now, the host provider has managed the server and we have not had to worry about crashes, downtime, software patches, etc. We are not server administrators, we are programmers, so this is not really our expertise. On the other hand, it may not be hard to learn. When we make a website live, we log in to a domain management control panel and change the primary and secondary name servers to point to our shared web host, e.g. ns1.sharedwebhost.com and ns2.sharedwebhost.com. These name servers are going to have to change when we have a VPS. I don't understand anything about how to set this up. Is there some useful info anyone could direct me to? Or is there software we need to install to make the primary and secondary name servers work on our VPS? The control panel we have for shared hosting comes with DNS management like this: http://www.yart.com.au/stackoverflow/dns.png What software would I need to install to create this for each site we host on a VPS? The control panel we have for shared hosting also comes with a POP email interface that allows email addresses to be added easily by our customers. Is this something that can be easily set up on a VPS so clients can manage their own email addresses? Is there software we need to install to make this work?

    Read the article

  • Running SSL locally on a hosts redirected domain name with Ubuntu and Apache

    - by Matthew Brown
    I recently made some changes to my Ubuntu computer so that a domain name resolves to my local copy of Apache. I edited /etc/hosts and added 127.0.0.1 thisbit.example.com, then set up a VirtualHost for the responses I wished to create. That all works fine, and my testing is now shooting ahead without harm or risk to the production server. Now for my next trick I need to test the authentication, and so need to do this with HTTPS. Basically, https://auth.example.com needs to work on my PC without the SSL causing an issue, which I imagine it would, as I am clearly not the true https://auth.example.com - but for the basis of this exercise I need to pretend that I am. Now, it might be that the apps I'm testing don't worry about checking the certificate (many are in Java, which I'm no expert with). What gotchas am I likely to encounter, and what is the best way of not letting my own hacks spoil my testing? I'm guessing the place to start is to enable SSL with Apache... I've never done that before, as it has never come up before.

    Read the article

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in a sequential order to find the appropriate back-end server. For example:

        Server 1: *.dev.mysite.com
        Server 2: *.stage.mysite.com
        Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
        Server 4: *

    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without requiring per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems very poorly suited to this partly performance-oriented task. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is frequently having to adjust the gateway for all production services and risk a network-wide outage... and so also is setting oneself up for longer downtime later if Apache becomes a too-small bottleneck. Which of these (or other) reverse proxies can do it/would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in apache2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the Event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.

    Read the article

  • Configure Windows firewall to prevent an application from listening on a specific port

    - by U-D13
    The issue: there are many applications competing to listen on port 80 (Skype, TeamViewer et al.), and for many of them that isn't even essential (in the sense that you can have an httpd running and holding the HTTP port, and the other application won't even complain about being unable to open it). What makes things worse is that some of these apps provide no way to configure which ports they use (that's what you get for using proprietary software) - you can either add the application to the Windows Firewall exceptions (and put up with the undesired port-opening behaviour) or not (and risk losing most, if not all, of its functionality). Technically, it is not impossible for the firewall to deny an application an incoming port even if the application is in the exception list. And if this functionality is built into the Windows Firewall somewhere, there should be a way to activate it. So, what I want to know is: does such an option exist, and if it does, how do I activate it?

    Read the article

  • running automated fsck on remote server

    - by GriffinHeart
    I had another question about df, and now I've come to the conclusion that I need to run fsck on my partition. I've been reading about it and would like some advice, if possible. The situation is like this: no physical access to the server, and I want to run fsck. From what I read, I just need to touch /forcefsck and when I reboot it will run fsck. My question is, basically: with what arguments will fsck run? Will it need user input to correct errors, etc.? And after running, will it save a log of what happened? If it ran like this it would be perfect - is there any way of enforcing that on reboot? fsck -v -p /machine/disk/p1 2>&1 > fscklog.txt Also, here they describe this: it's also a good idea on Debian and Debian derivatives like Ubuntu to edit /etc/default/rcS on remote servers and set "FSCKFIX=yes"; that adds "-y" to the boot-time fsck, so it doesn't risk the remote server being stuck waiting for someone to log in at the console and run fsck. But on CentOS that doesn't seem to exist. I only have SSH access at the moment, so that is why I'm being so picky with it. Here's some info about disks and mounted volumes on the server: http://pastebin.centos.org/33314 Thanks.

    Read the article

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy and it asked me to run apt-get -f install to fix them, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is
        disabled in the running kernel. Please upgrade your kernel before or while upgrading udev.
        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT WORK
        WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating the
        /etc/udev/kernel-upgrade file. There is always a safer way to upgrade, do not try this unless
        you understand what you are doing!
        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
         /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I just ran the VBoxLinuxAdditions-amd64.run script, basically), so I don't know how to disable them. Any ideas? Thanks!

    Read the article

  • cfengine3 file_copy only on source side change

    - by megamic
    I am using the 'digest' copy method for all file copy promises; because of the way we package and deploy software, I can't rely on mtime as the criterion for updating files. For various reasons, I am not employing the client-server approach with a central configuration server: rather, we package and deploy our entire configuration module to each server, so from cfengine's perspective the source and target are both local on the server it is running on. The problem I am having with this approach is that the source will always update the target when they differ - which is what I want most of the time, usually because the source has been updated. However, like many other cfengine users, we are running an operational environment where occasionally emergency fixes have to be applied immediately - meaning we don't have time to rebuild and redeploy a configuration module, and the fix will often be applied by deploying a tarball with specific changes. Of course this is problematic if cfengine comes along 5 minutes later and reverts the changes. What we would like is to be able to make small, incremental changes to our servers without them being reverted, until the next deployment cycle, at which time the new source files would be copied. We do not consider random file corruption or mistaken changes to involve enough risk to warrant having cfengine constantly revert deployments to their source copy - the ability to deploy emergency fixes and have them stay that way until the next deployment would be of much greater value and utility. So, after all that, my question is this: is cfengine capable of detecting whether it was the source or the target that changed when the files differ, and if so, is there a way to use the 'digest' copy method only when the source side has changed? I am very open to other ideas and approaches as well, as I am still quite new to this whole configuration management thing.

    Read the article

  • Hard drives (SATA/ATA) corrupting

    - by JC Denton
    Hello all, two years ago I bought relatives a new computer for Christmas and installed Ubuntu on it. Ever since then it has been experiencing problems with the hard drives. The hard drive supplied with the machine was a SATA drive. When it appeared to be having problems (files and folders with invalid encoding started appearing), I replaced the SATA drive with the drive from their previous computer. I later replaced that replacement as well, the drive being rather old and thus more prone to failure. The replacement drive is an IDE drive, but the same problems started to appear (files and folders showing up in Nautilus with invalid encoding). I fear the files and folders that are showing up are existing filesystem entries starting to corrupt. As it's happening to both the IDE and SATA drives, it's unlikely to be the drives themselves or the IDE/SATA controller, I believe. Any ideas as to what could be causing the (assumed) corruption? EDIT: You're right about the paragraphs. They were there in edit mode, but I'm still getting to grips with the whitespace format codes. The system is a "Primo Pro" AMD Phenom II X4 Quad Core 920 2.80GHz SILENT DDR2, ordered from overclockers.co.uk, and nothing has been added to it except for the replacement of the SATA drive. It would seem unlikely for a barebones system to be underpowered.

    Read the article

  • Replacing stock Core 2 Duo heatsink fan (just the fan really) with a Dell CPU fan

    - by user647345
    My old heatsink fan broke and I'm trying to connect its plug to a new fan. My Dell CPU fan has a custom Dell plug, so I snipped the old fan's wire in half and kept the plug on the end of it; I want to connect the Dell fan's wires to that plug. The motherboard is a P5Q-E; the stock Core 2 Duo fan was 0.20 A and the Dell fan is 0.70 A. Is that going to matter? The wire from the fan has four wires, and the wire with the plug has four wires. They share three of the four colors: red, black, and blue. Dell's fourth wire is white, while the plug's fourth wire is yellow. Is it safe to assume that I can just connect the yellow and white wires together and match the rest up? I don't want to take any risk of damaging anything. It runs fine passively without a fan, but I have SpeedStep on, so I would like to use this fan and just fasten it to the heatsink with some twist ties and paperclips and call it a day.

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >