Search Results

Search found 21331 results on 854 pages for 'require once'.


  • IPSec VPNs being dropped by router and will not re-establish

    - by David Gard
    We have 3 sites, with our two remote sites connecting to head office via LAN-to-LAN VPNs. All 3 sites use DrayTek 2900s with firmware version v3.3.1.1_RC2 (this is a release candidate that DrayTek suggested I try, but sadly it made no difference). The only way to re-establish the VPNs once they have been dropped is to restart the router. Head office is set to dial out to both sites, with both the 'Always on' and 'Enable PING to keep alive' (pinging a server in the remote offices) options ticked. However, at random intervals the VPNs drop, logging IKE_RELEASE VPN : Dial-out Profile Index = 7, Name = Shepton for one connection, and '6' and 'Wincanton' for the other. I first tried swapping the router with one from another site, and then had all three replaced, but that failed to solve the problem. Is anyone aware of anything that could cause the VPNs to drop randomly like this? Thanks.

  • New Release Overview Part 1

    - by brian.harrison
    Ladies & Gentlemen, I have been getting a lot of questions over the last month or two about the next release of WCI, codenamed "Neo". Unfortunately I cannot give you an exact release date, which I know you would all be asking me for if we were talking face to face, but I can definitely provide you with information about some of the features that will be made available. So over the next few blog entries, I am going to provide you with details about two features and even provide you with screenshots for some of them.

    KD Browser Portlet

    This portlet will provide a Windows Explorer look and feel to the Knowledge Directory from within a Community Page or My Page. Not only will the portlet provide access to the folder structure and the documents within, but the user or community manager will also have the ability to modify what is being shown. From within a preferences page, the user or community manager can change which top-level folders are shown within the folder structure, as well as which properties are available for each document that is shown. There are also a number of other portlet-specific customizations available.

    Embedded Tagging Engine

    As some of you might be aware, there was a product made available just prior to the Oracle acquisition, known as Pathways, which gave users the ability to add tags to documents that were either in the Knowledge Directory or in the Collaboration Documents section. Although this product is no longer available separately for customers to purchase, we definitely felt that the functionality was important and interesting enough that other customers should have access to it. The decision was made for this release to embed the original Pathways product as the Tagging Engine for WCI and Collaboration. This tagging engine will allow a user to add tags to documents in the Knowledge Directory as well as in the Collaboration Documents section. Once the tags are added to the Tagging Engine and associated with documents, a user will have the ability to filter documents when processing a search, according to the Tags Cloud that will now be available on the Search Results page, and this will be true no matter what kind of search is being processed. In addition to all of that, all of the Pathways portlets will also be available for users to add to their My Page.

  • Mounting a Mail Store that is in a Recovery Storage Group on Exchange 2003

    - by Kyle Brandt
    If I have a production server with the Mail Store Foo in both the storage group companyName and the Recovery Storage Group, is it okay to mount Foo in the RSG while it is mounted in companyName, so I can extract some mailboxes from the Recovery Storage Group? Basically I am wondering whether it is okay to mount it in both the production and Recovery Storage Groups while the mail server is in production and the particular mail store is in production. Reference: "Once an RSG is restored into and mounted up you can connect to it with ExMerge and read out mailboxes into PST files for merging back into a 'live' store" -- http://serverfault.com/questions/49728/test-restore-of-exchange-dbs-with-the-ms-exchange-plugin-of-netbackup-6

  • Unable to suspend with FGLRX enabled

    - by schmmd
    I am unable to suspend in a fully updated (as of yesterday, 29 March 2011) Ubuntu 10.10 installation (kernel 2.6.35-28). Following is a list of some of my hardware: Motherboard: Gigabyte GA-X58A-UD3R; Video: Radeon HD-567X-YNF3. Initially, when I went to either Suspend or Hibernate, the machine would almost go into suspend, but it would never power down. Instead it would bounce back to the login screen. This was due to a problem with the USB3 ports being unable to suspend (I noticed this in /var/log/kern.log). Disabling the USB3 ports in the BIOS fixed this issue. Now Suspend and Hibernate power down the machine. It successfully awakens from Hibernate. However, it will not return from Suspend. The mouse and keyboard are not powered and the monitor has no signal. These devices are still not powered after a restart; I must power-cycle the machine. The pm-suspend log ends after it states that it entered suspend (i.e. there is no information about any resume code running). I discovered that acpitool -s suspends the machine and resumes successfully exactly once; the second time the machine will not resume. I am not sure how these two tools handle suspend differently. UPDATE: the problem was introduced somewhere between 2.6.35-22 and 2.6.35-28. I have both kernels installed presently. Suspend works fine with 2.6.35-22 but not with 2.6.35-28.

  • Alice In Wonderland: Good, but not Great

    - by Theo Moore
    We went to see Alice In Wonderland today. We both like Tim Burton a lot (the stranger the better) and like Johnny Depp very well also. After seeing all the previews and such, we were fired up to see this film. Honestly, I thought it was good but not great. I was prepared to be wow-ed, but I wasn't. Perhaps I expected too much. I did like it, but I'll not own it, nor would I expect to see it again...unless someone I know decides they want to see it. I was about to say something to reassure you that I wasn't going to provide any spoilers, but two things occurred to me: one, I never give spoilers, and two, why worry about spoilers for a film that so closely follows a book? My comments about the film are hard to describe, but the basic gist is that it doesn't really feel like it..."works" to me. I can't get any more specific than that, much as I'd like to do so. Something about it seems sort of disjointed, and not in that Alice way you'd expect. My only specific comment is that I didn't like the actor who plays Alice very well. She was very flat and just didn't sell her character to me. She seemed a bit, well, plastic. Depp was as good as you'd expect him to be, I am happy to say. Obviously Lewis Carroll couldn't have imagined this made into film, but I can't help thinking that he'd see this and say that Depp was the perfect Mad Hatter. So, I'd definitely recommend seeing it (we saw it in 3D, which was cool, but not really necessary) at least once, but don't be surprised if you're kinda meh afterwards.

  • Redirect all access requests to a domain and subdomain(s) except from specific IP address? [closed]

    - by Christopher
    This is a self-answered question... After much wrangling I found the magic combination of mod_rewrite rules, so I'm posting here.

    My scenario is that I have two domains - domain1.com and domain2.com - both of which are currently serving identical content (by way of a global 301 redirect from domain1 to domain2). Domain1 was then chosen to be repurposed as a 'portal' domain - with a corporate CMS-based site leading off from the front page, and the existing 'retail' domain (domain2) left to serve the main web site. In addition, a staging subdomain was created on domain1 in order to prepare the new corporate site without impinging on the root domain's existing operation. I contemplated just rewriting all requests to domain2 and setting up the new corporate site 'behind the scenes' without using a staging domain, but I usually use subdomains when setting up new sites. Finally, I required access to the 'actual' contents of the domains and subdomains - i.e., to not be redirected like all other visitors - in order that I can develop the new site and test it in the staging environment on the live server, as I'm not using a separate development webserver in this case. I also have another test subdomain on domain1 which needed to be preserved.

    The way I eventually set it up was as follows (10.2.2.1 would be my home WAN IP).

    .htaccess in the root of domain1:

      RewriteEngine On
      RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
      RewriteCond %{HTTP_HOST} !^staging\.domain1\.com$ [NC]
      RewriteCond %{HTTP_HOST} !^staging2\.domain1\.com$ [NC]
      RewriteRule ^(.*)$ http://domain2.com/$1 [R=301]

    .htaccess in the staging subdomain on domain1:

      RewriteEngine On
      RewriteCond %{REMOTE_ADDR} !^10\.2\.2\.1$
      RewriteCond %{HTTP_HOST} ^staging\.domain1\.com$ [NC]
      RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]

    The multiple .htaccess files and multiple rulesets mean more processing overhead and longer iteration, as the visitor is potentially redirected twice; however, I find it a more granular method of control, as I can selectively allow more than one IP address access to individual staging subdomain(s) without automatically granting them access to everything else. It also keeps the rulesets fairly simple and easy to read (or re-interpret, because I'm always forgetting how I put rules together!).

    If anybody can suggest a more efficient way of merging all these rules and conditions into just one main ruleset in the root of domain1, please post! I'm always keen to learn; this post is more my attempt to preserve this information for those who are looking to redirect entire domains for all visitors except themselves (for design/testing purposes), and not just denying specific file access for maintenance mode (there are many good examples of simple mod_rewrite rules for 'maintenance mode' style operation easily findable via Google).

    You can also extend the IP address detection. Firstly, by using wildcards: in ^10\.2\.2\..* the final \..* denotes the usual "." followed by "zero or more arbitrary characters" (the .*), so you can match everything in a subnet. Secondly, you can use square brackets, remembering that a character class matches a single character, so numeric ranges have to be built digit by digit: ^10\.2\.[1-9]?[0-9]\. matches third octets 0-99, and ^10\.2\.1[0-1][0-9]\. matches 100-119, and so on. Thirdly, if you wish to specify multiple discrete IP addresses, you can alternate them in the style of ^(1\.1\.1\.1|2\.2\.2\.2|3\.3\.3\.3)$, and you can of course combine this with character classes for octets or single digits again.

    NB: if you're using individual RewriteCond lines to positively match multiple IPs or ranges, make sure to put [OR] at the end of each one; otherwise mod_rewrite will interpret them as "if the IP address matches 1.1.1.1 AND the IP address matches 2.2.2.2...", which is of course impossible. This isn't necessary when you're using the ! negator to say "and is not...", since there you do want every condition to hold at once.

    Kudos also to SE: this older question also came in useful when I was verifying my own knowledge prior to futzing around with the code, as were the various other links below (can't hyperlink them all due to spam protection... other regex checkers are available). The AddedBytes cheat sheet is useful to pin up on your wall. Other referenced URLs:
    internetofficer.com/seo-tool/regex-tester/
    fantomaster.com/faarticles/rewritingurls.txt
    addedbytes.com/cheat-sheets/mod_rewrite-cheat-sheet/
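    A possible merged ruleset (an untested sketch, assuming the root domain and the staging subdomain are served from the same document root, so a single .htaccess sees both hosts; if the subdomain has its own document root, the two-file approach above is still needed):

      RewriteEngine On
      # Let my own IP through to everything
      RewriteCond %{REMOTE_ADDR} ^10\.2\.2\.1$
      RewriteRule ^ - [L]
      # Leave the second test subdomain untouched for all visitors
      RewriteCond %{HTTP_HOST} ^staging2\.domain1\.com$ [NC]
      RewriteRule ^ - [L]
      # Everyone else, on the root domain or the staging subdomain, goes to domain2
      RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]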

  • Oracle B2B 11g - Transport Layer Acknowledgement

    - by Nitesh Jain Oracle
    In the health care industry, acknowledgements or responses should be sent back very quickly: once a message is received, an acknowledgement should be sent back to the trading partner (TP). Oracle B2B provides a way to send an acknowledgement or response from the transport layer of MLLP; this is called an immediate acknowledgment. An immediate acknowledgment is generated and transmitted in the transport layer. It is an alternative to the functional acknowledgment, which is generated after processing/validating the data in the document layer. Oracle B2B provides four types of immediate acknowledgment:

    Default: Oracle B2B parses the incoming HL7 message and generates an acknowledgment from it. This mode uses the details from the incoming payload and generates the acknowledgement based on the incoming HL7 message control number, sender, and application identification. By default, an immediate ACK is a generic ACK. The trigger event can also be sent back by using the Map Trigger Event property. If mapping the MSH.10 of the ACK to the MSH.10 of the incoming business message is required, then enable the Map ACK Control ID property.

    Simple: B2B sends a predefined acknowledgment message to the sender without parsing the incoming message.

    Custom: Custom immediate ACK/response mode lets users define their own response/acknowledgement. This is configured via the file set in the Custom Immediate ACK File property.

    Negative: In this case, an immediate ACK is returned only in the case of exceptions.

  • Lock down a site using Forms Auth in IIS7 with Windows Auth

    - by Josh
    I have an ASP.NET MVC 1.0 application that uses Forms Authentication. We are using Windows Server 2008. I need to lock down the site so that only certain users (in AD groups) can access the site. Unfortunately, though, when I set the site to not allow anonymous users and use Windows authentication, due to the integration of the site and IIS, it shows the user as signed in with their domain account, instead of allowing them to sign in through Forms Auth. So, I need mixed-mode authentication. I need the site to be accessible only through Windows auth, without anonymous users, but once you are in, it needs to use Forms auth only. How would I go about doing this the right way?
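    A pattern commonly described for this (sketched below as an assumption, not a verified answer) is to keep Forms auth site-wide, enable Windows auth in IIS7 for a single login page only, and have that page translate the domain identity into a Forms ticket. The WinLogin page and its code-behind here are hypothetical:

      // Hypothetical WinLogin.aspx.cs. This one page is configured in IIS7 for
      // Windows authentication only; the rest of the site keeps Forms auth.
      using System;
      using System.Web.Security;

      public partial class WinLogin : System.Web.UI.Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // IIS has already authenticated the visitor, so User.Identity.Name
              // holds the DOMAIN\user account. AD group checks would go here.
              FormsAuthentication.SetAuthCookie(User.Identity.Name, false);
              Response.Redirect(FormsAuthentication.DefaultUrl);
          }
      }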

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with the Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework.

    The scenario is this: I will pass in the userid, and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test that illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

    First of all you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process in. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. I could also use the same process to create a Mediator or OSB project (refer to the JDeveloper documentation on these technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I name the individual BPEL process simpleBPEL (you can use a different name, but I wanted to keep the project and process names the same for this example). I will also build a Synchronous BPEL Process, as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against. Note: for simplicity I am going to use as much defaulting as possible. In fact, I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as the default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas, in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling). When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name; I have used the name RetrieveUser. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL, you can press the "Find existing WSDLs" button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you are intending to work on the BPEL process offline. If you do not check this, your target application must be accessible whenever you work on the BPEL process (and that is not always convenient). Note: the perceptive among you will notice that the URL specified in this example is different to the URL in the last post. The reason is that for these demonstrations I shifted to a new server and did not redo all of the past screen captures. If you copy the WSDL into the project you will get an information screen about Localize Files; it is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.

    To complete the setup of the Web Service you need to set the credentials for the Web Service to use. Refer to the past post on how to do that.

    Now to use the Web Service. To call the Web Service (as it is just imported, not connected to the BPEL process yet), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service. Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straightaway, and to name them appropriately, so you can trace the logic; I used InvokeUser as the name in this example. To complete the node configuration you must create variables to hold the input and output for the call. To do this, click on "Automatically Create Input Variable" in the Edit Invoke dialog. You will be presented with a default variable name; it uses the node name as a prefix (that is why it is important to name the node before hitting this button). You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You now have a very basic BPEL process which contains an input, invoke and output node. It is not complete yet, though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the client. You need to add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process. Double clicking the node allows you to specify its name; I used AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable (inputVariable/payload/process/input string) and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am using the input to pass to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from the source will be copied to the target).

    Almost there. You now need to process the output from the Web Service call into the outputVariable of the client call. I have decided to pass back one piece of data: the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process. In this case we want to transform the output from the Web Service call, so we want it after the InvokeUser action and before the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double clicking the Transform node allows you to name the node; in this example I used TransformName. To complete the transform I need to tell the product the source of the transformation and the target of the transform. In the example the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map I can create the map. The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I then attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space; to make the name legible I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping.

    I can now save the whole project, as my BPEL process is complete. As you can see, the following happens: we accept input from the client (the userid for the call) in the receiveInput step; we assign that value to the input parameters for the Web Service call in the AssignUser step; we invoke the Web Service call to retrieve the data from the product in the InvokeUser step; we take the output from the InvokeUser step and concatenate the names in the TransformName step; and we pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen; in my case I am returning my own userid information. On the Response tab you will see the result. It works. You can verify the steps using the Audit trail facility on individual calls. As you can see this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.
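    For reference, the concatenation the mapper produces boils down to a single XSLT concat() call. Below is a hypothetical reconstruction of the TransformName.xsl output; the namespaces and element names are illustrative, not copied from the actual generated file:

      <?xml version="1.0" encoding="UTF-8"?>
      <!-- Hypothetical reconstruction of TransformName.xsl -->
      <xsl:stylesheet version="1.0"
                      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                      xmlns:client="http://xmlns.example.com/simpleBPEL"
                      xmlns:user="http://xmlns.example.com/AS-User">
        <xsl:template match="/">
          <client:processResponse>
            <client:result>
              <!-- join firstName and lastName with a single space -->
              <xsl:value-of select="concat(/user:AS-User/user:firstName, ' ',
                                           /user:AS-User/user:lastName)"/>
            </client:result>
          </client:processResponse>
        </xsl:template>
      </xsl:stylesheet>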

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found the ulog grows out of control and locks up the disk. At first I found the -ulim option useful and limited the logfile size; however, it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that will delete ulogs older than X, once I find out how far back Tokyo Tyrant needs to read in the update log in order to fail over. Does anyone have any experience with this in Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size versus how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan
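    For what it's worth, the cleanup script mentioned above can be as small as the sketch below; the ulog directory and the 7-day retention are assumptions, and the retention must comfortably exceed the worst-case replication lag:

      #!/bin/sh
      # Prune Tokyo Tyrant update logs older than RETENTION_DAYS.
      # ULOG_DIR and the 7-day retention are assumptions; the retention
      # must be longer than any slave could ever lag behind the master.
      RETENTION_DAYS=7
      ULOG_DIR=/var/ttserver/ulog
      find "$ULOG_DIR" -name '*.ulog' -type f -mtime +"$RETENTION_DAYS" -delete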

  • Special 48-Hour Offer: Free ASP.NET MVC 3 Video Training

    - by ScottGu
    The Virtual ASP.NET MVC Conference (MVCConf) happened earlier today. Several thousand developers attended the event online and had the opportunity to watch 27 great talks presented by the community. All of the live presentations were recorded, and videos of them will be posted shortly so that everyone can watch them (for free). I'll do a blog post with links to them once they are available.

    Special Pluralsight Training Available for Next 48 Hours

    In my MVCConf keynote this morning, I also mentioned a special offer from Pluralsight (a great .NET training partner): the opportunity to watch their excellent ASP.NET MVC 3 Fundamentals course free of charge for the next 48 hours. This training is 3 hours and 17 minutes long and covers the new features introduced with ASP.NET MVC 3, including: Razor, Unobtrusive JavaScript, Richer Validation, ViewBag, Output Caching, Global Action Filters, NuGet, Dependency Injection, and much more. Scott Allen is the presenter, and the format, video player, and cadence of the course are really great. It provides an excellent way to quickly come up to speed with all of the new features introduced with the new ASP.NET MVC 3 release. Click here to watch the Pluralsight training - available free of charge for the next 48 hours (until Thursday at 9pm PST).

    Other Beginning ASP.NET MVC Tutorials

    We will be publishing a bunch of new ASP.NET MVC 3 content, training and samples on the http://asp.net/mvc web-site in the weeks ahead. We'll include content that is tailored to developers brand-new to ASP.NET MVC, as well as content for advanced ASP.NET MVC developers looking to get the most out of it. Below are two tutorials available today that provide nice introductory step-by-step ASP.NET MVC 3 walkthroughs:

    Build your First ASP.NET MVC 3 Application
    ASP.NET MVC Music Store Tutorial

    I recommend reviewing both of the above tutorials if you are looking to get started with ASP.NET MVC 3 and want to learn the core concepts and features behind it. Hope this helps, Scott

  • Google Apps email hosting for a GoDaddy-hosted site works locally but not on live site

    - by CrB
    GoDaddy email issues are plentiful, but I have not been able to find anyone who has resolved this same problem. I have a GoDaddy hosted site and a Google Apps account. The MX info on GoDaddy is correct, as are my server-side code and the Google Apps credentials in my web.config file (host: smtp.gmail.com, port: 587). I know this because I am able to send emails through SmtpClient when the site is hosted on my local machine's server while debugging. However, once transferred to the GoDaddy hosting account, all emails fail to send; they just time out. Nothing has changed aside from the site being run on the GoDaddy server as opposed to a local server. EDIT - SSL is enabled. A two-part question: 1) Does anybody have any ideas about how to tackle this? 2) If not, is there another web hosting or email hosting site, or a combination of the two, that people can confirm is fast, actually works, and is not impossible to coordinate, as is everything with GoDaddy? (I am aware that GoDaddy has their own relaying email server, but I used it before switching to Google and found emails arriving 30-60 minutes late.)
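    For context, the sending code in question is essentially the standard SmtpClient pattern; a minimal sketch with the settings described above (the credentials and addresses are placeholders):

      using System.Net;
      using System.Net.Mail;

      class MailTest
      {
          static void Main()
          {
              // Same settings as in web.config: smtp.gmail.com on port 587 with SSL
              var client = new SmtpClient("smtp.gmail.com", 587)
              {
                  EnableSsl = true,
                  // placeholder credentials - use the real Google Apps account
                  Credentials = new NetworkCredential("user@yourdomain.com", "password")
              };
              client.Send("user@yourdomain.com", "recipient@example.com",
                          "test from GoDaddy host", "did this arrive?");
          }
      }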

  • Node remains in commissioning status

    - by Vinitha
    I have been trying to set up Ubuntu Cloud 12.04. I'm kind of new to MAAS and Ubuntu. Here is what I followed:

    1. I installed the MAAS server using the steps provided in https://wiki.ubuntu.com/ServerTeam/MAAS
    2. For the node, I installed the Ubuntu 12.04 Server image on a USB stick. Then I restarted the node and opted to enlist the node via boot media, with PXE. Once the process was done, the node was powered off as expected. I manually powered on the node, as my node is not PXE enabled. Result: no node was visible in the MAAS UI.
    3. Since step 2 didn't work, I added the node via the maas-cli command. After the execution of this command I got the node reflected in my MAAS UI, but the status continues to be "Commissioning" for a long time.

    Then I executed "maas-cli maas nodes check-commissioning" and I got "Unrecognised signature: POST check_commissioning". I'm not sure where the error is. Could someone please help me solve this issue? I checked the following log files but found no errors related to commissioning (pserv.log, maas.log, celery.log, celery-region.log). I found this entry in my auth.log: "Nov 16 18:20:34 ubuntuCloud sshd[4222]: Did not receive identification string from xxx.xx.xx.x" - not sure if it indicates anything, as the IP mentioned is not that of the node nor of the MAAS server. I also verified the time on the server and node using the date command (at one instance the times were: Server: Fri Nov 16 18:15:51 IST 2012, Node: Fri Nov 16 18:15:43 IST 2012). I'm not sure if 'date' is the right command to set the time. I have also checked maas_local_settings.py for the MAAS URL. I'm not sure what other logs need to be verified. Is there any log that can be checked on the node? Thanks, Vinitha

  • Using CheckPoint SNX with RSA SecurID Software Token to connect to VPN

    - by Vinnie
    I have a fairly specific issue that I'm hoping someone else out in the community has tackled with success. My company uses CheckPoint VPN clients on Windows XP machines, with RSA SecurID software to generate the tokens. The beauty is that once you generate a token code in the software, you can enter it, along with your username, on any machine trying to connect via VPN and get connected. So, I've got Ubuntu 10.10 32-bit on a tower, and formerly on a laptop. Through several posts around the web, I was able to get SNX installed on the laptop and plug in my server connection information, only to be asked for a password and have the connection fail. I ran it in debug mode and was able to see that the application was trying, and failing, to write a registry value, but I believe that to be a symptom of a different issue, even though I tried to find a way to remedy it. I'm wondering if anyone out there is on a similar configuration and was able to connect with SNX using an RSA token? If so, what steps did you take to set it up, and what problems/solutions did you encounter?

  • MacBook Pro + OSX Lion + Samsung SA550 HDMI not playing nicely

    - by rabbid
    I bought a Samsung SA550 monitor as a second monitor for my 2009 MacBook Pro that is running OS X 10.7 Lion. The monitor has 2 inputs, VGA and HDMI. If I connect using VGA, everything is fine. If I connect using an HDMI-to-DVI cable and a DVI-to-Mini DisplayPort converter, I get static and flickering. I have used this converter with a different monitor that had a DVI input a while ago, so at least it used to work once upon a time. I am not sure if this problem is because of that converter or not. I have never used a monitor with an HDMI input before. Appreciate your help. Thank you!

  • Is Agile the new micromanagement?

    - by Smith James
    This question has been cooking in my head for a while, so I wanted to ask those who are following agile/scrum practices in their development environments. My company has finally ventured into incorporating agile practices and has started out with a team of 4 developers in an agile group on a trial basis. It has been 4 months, with 3 iterations, and they continue to do it without going fully agile for the rest of us. This is due to management's limited trust that agile can meet business requirements, given quite a bit of ad hoc requests coming from high above.

    Recently, I talked to the developers who are part of this initiative; they tell me that it's not fun. They are not allowed to talk to other developers by their Scrum master and are not allowed to take any phone calls in the work area (which may be fine to an extent). For example, if I want to talk to my friend in the agile team just for kicks, I am not allowed to without the approval of the Scrum master, who is sitting right next to the agile team. The idea of all this, or of agile here, is to provide agile developers a complete vacuum, free from any interruptions, and to have them put in a good 6+ productive hours.

    Well, guys, I am no agile guru, but from what I have read of the Yahoo agile rollout document and similar ones from other organizations, I get the feeling that agile is not cheap. It requires resources and budget to instill agile into the teams and to correct issues as they arise to put them back on track. For starters, it requires training for developers, coaching for managers, and so on. The current Scrum master is a manager who took a couple of days of agile training, paid for by management, and is now leading this agile team. I have also heard in meetings that the agile manifesto is not set in stone and is customized differently for each company.

    Well, it all sounds good and reasonable. In conclusion, I always thought agile was supposed to bring harmony to development teams, resulting in happy developers. However, I am getting the very opposite feeling when talking to the developers in the agile team. They are unhappy that they cannot talk about anything but work, sitting quietly all day just working, and they feel it's just another way for management to make them work more. Tell me, please, if this is an example of a good practice being used for selfish advantage, for more dollars? Or maybe it's just that developers like me and this agile team don't like to work in an environment where they only breathe work because they are at work. Thanks.

    Edit: It's a company in the healthcare domain that has offices across the US. It definitely feels like cowboy-style agile, which makes me really not want to go agile at all, especially at my current company. All of it has to do with management being completely cheap: cutting out expensive coffee for a cheaper version, emphasis on savings and on being productive while staying as lean as possible. My feeling is that someone in management behind closed doors threw out this idea that agile makes you produce more, so we can show our bosses we're producing more with the same headcount. Or maybe it will allow us to reduce headcount, if that's the case.

    EDITED: They do have their 5-minute daily meeting. But they are not allowed to chat or talk with anyone outside of their team. All focus is on work.

  • hosts file ignored, how to troubleshoot?

    - by Superbest
    The hosts file on Windows computers is used to bind certain name strings to specific IP addresses, overriding other name resolution methods. Often, one decides to change the hosts file and discovers that the changes refuse to take effect, or even that old entries of the hosts file are ignored thereafter. A number of "gotcha" mistakes can cause this, and it can be frustrating to figure out which one. When faced with the problem of Windows ignoring a hosts file, what is a comprehensive troubleshooting protocol that may be followed? This question has duplicates on SO, such as "hosts file seems to be ignored", "HOSTS file being ignored", and "/etc/hosts file being ignored", as well as numerous discussions elsewhere. However, these tend to deal with a specific case, and once whatever mistake the OP made is found out, the discussion is over. If you don't happen to have made the same error, such a discussion isn't very useful. So I thought it would be more helpful to have a general protocol for resolving all hosts-related issues that would cover all cases.
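    As a starting point, a first pass at such a protocol might look like the command sequence below; each step rules out one of the usual gotchas (wrong file, hidden .txt extension, stale cache, the DNS Client service holding an old copy, or a proxy that bypasses hosts entirely). The test hostname is a placeholder:

      :: 1. Make sure you are editing the real file, and that there is no
      ::    hosts.txt sitting next to it (hidden known-extensions strike again)
      dir %SystemRoot%\System32\drivers\etc

      :: 2. Inspect the contents; the file must be plain text (no UTF-16, no BOM)
      type %SystemRoot%\System32\drivers\etc\hosts

      :: 3. Flush the resolver cache so old lookups don't mask the change
      ipconfig /flushdns

      :: 4. The DNS Client service caches hosts entries; restarting it (elevated)
      ::    forces a re-read
      net stop dnscache
      net start dnscache

      :: 5. Test with ping; note that browsers configured to use a proxy resolve
      ::    names at the proxy and can skip the hosts file altogether
      ping -4 myentry.example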

  • PXE boot FreeBSD iso from pxelinux server

    - by Andrew
    I'm using FOG as a TFTP / PXE server and would like to be able to boot a FreeBSD LiveCD (specifically pfSense, but it could be any LiveCD, really); I've found HOWTOs for booting a "netboot" BSD, but they all seem to use a BSD server. So:

    Is it possible to PXE boot BSD from a Linux server?
    Is it possible to PXE boot a BSD LiveCD?
    Is it possible to PXE boot a Linux LiveCD?

    My main motivation is to be able to boot small LiveCD images (e.g. < 100MB) that I may only use once and don't want to burn a physical CD for.
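    One technique worth noting for the LiveCD questions: pxelinux can chain ISO images through memdisk, although many full LiveCDs fail later in boot because the LiveCD's own kernel has no driver for the memdisk ramdisk and cannot find its root filesystem. A sketch of the pxelinux.cfg entry, with an assumed image path:

      # pxelinux.cfg entry: boot an ISO via memdisk (paths are assumptions)
      LABEL pfsense-live
        KERNEL memdisk
        INITRD images/pfSense.iso
        APPEND iso raw
      # Caveat: memdisk serves the ISO at the BIOS layer; once the LiveCD's own
      # kernel takes over, it must have a way to reach that ramdisk or it will
      # fail to mount its root. Small utility images work most reliably.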

  • You Can't Win on Price

    - by David Dorf
    This year I did the majority of my Christmas shopping from the comfort of my home office. There aren't many things in stores you can't find online these days. I find it easier to search, research, and compare products online rather than walking the mall anyway. But there's a segment of the population that likes to be in the store, touching the products. For those people, smartphones give them some of the e-commerce features I mentioned right there in the aisles. First it was RedLaser, then TheFind, ShopSavvy and many others. But the one that should be scaring retailers is Amazon's PriceCheck application. It lets you scan the product barcode, take a picture of the product, or speak the product's name. Once the product is identified, it shows the online prices, with Amazon at the top of the list. Within 10 seconds you can order the item, and Amazon Prime members get free 2-day shipping too. I don't think fashion and grocery retailers need to worry much, but I have to believe smartphones are helping Amazon win a little more of the brand-name hardgoods market. So what's a retailer to do? Best Buy has begun to put QR Codes on their shelf labels that are easily scanned by smartphones and take the consumer to a Best Buy Web page where they can get extended information about the product. The consumer gets the additional information they want, and Best Buy avoids the price comparisons. Of course, if a consumer chooses to use the Amazon PriceCheck app, then all bets are off. That's when Best Buy has to hope the in-store experience and customer service will save the sale. My point is that the internet makes information available to everyone, and smartphones make it available anywhere. Unless you want your store to be Amazon's local showroom, you need to be price-competitive but differentiate on other aspects of the shopping experience. With the cost of running a physical store, you can't win on price.

  • Formatting Keywords to UPPERCASE In Oracle SQL Developer

    - by thatjeffsmith
    I received this question from a customer today, and it took me more than a few minutes to remember where this preference was located in SQL Developer. This tells me that the topic is ripe for blogging.

    How do I go from:

      select * from scott.emp where ename like '%JEFF%'

    to:

      SELECT * FROM scott.emp WHERE ename LIKE '%JEFF%'

    It's all in the formatting. You need to access the formatting preferences under the Tools menu. It takes a bit of navigating to get there, so bear with me:

    Tools
    Database
    SQL Formatter
    Oracle Formatting
    Click 'Edit' on the profile
    Other
    Case change: 'Keywords Uppercase'

    It's easy to find once you know where to look, right? You can tell it to leave the case alone, uppercase everything, uppercase only the keywords, or lowercase everything.

    Accessing the Formatter Options

    We allow separate formatting options for different RDBMSs. You need to make sure you're accessing the 'Oracle Formatting' page in the preferences. You can then choose to edit the default options, OR you can do what I have done: save the defaults as a new set of options. I've called my profile 'JeffCustom.' I can now switch back and forth through different sets of formatting options. You need to hit the 'Edit' button to get to the formatting options editor; a good number of people seem to miss this. Select your profile, then hit the 'Edit' button.

  • What I’m Reading – 2 – Microsoft Silverlight 4 Data and Services Cookbook

    - by Dave Campbell
    A while back I mentioned that I had a couple of books on my desktop that I’ve been “shooting holes” in … in other words, reading pieces that are interesting at the time, or looking something up, rather than starting at the front and heading for the back. The book I want to mention today is Microsoft Silverlight 4 Data and Services Cookbook, by Gill Cleeren and Kevin Dockx. As opposed to the authors of the last book I reviewed, I don’t personally know Gill or Kevin, but I’ve blogged a lot of their articles… both are prolific and on-topic writers. The ‘recipe’ style of the book shouldn’t put you off. It’s more about the way the chapters are laid out than anything else, and once you see one of them, you recognize the pattern. This is a great eBook to have around to open when you need to find something useful. As with the other PACKT book I talked about, I have the eBook, because for technical material, at least lately, I’ve gravitated toward that format. I can have it with me on a USB stick at work, or at home. Read the free chapter, then check out their blogs. You may be surprised by some of the items you’ll find inside the covers. One such nugget is one I don’t think I’ve seen blogged: “Converting Your Existing Applications to Use Silverlight”. Another good job! Technorati Tags: Silverlight 4

  • Tough Decisions

    - by Johnm
    There was once a thriving business that employed two Database Administrators, Sam and Jim. Both DBAs were certified, educated and highly talented in their skill sets. During lunch breaks these two DBAs were often found together discussing best practices, troubleshooting techniques and the latest release notes for the upcoming version of SQL Server. They genuinely loved what they did. The maintenance of the first database was the responsibility of Sam. He was the architect of this server's setup, and he was very meticulous in its configuration. He regularly monitored the health of the database, validated backup files and regularly adhered to the best practices that were advocated by well respected professionals. He was very proud of the fact that there was never a database that he managed that lost data or performed poorly. The maintenance of the second database was the responsibility of Jim. He too was the architect of this server's setup. At the time that he built this server, his understanding of the finer details of configuration was not as clear as it is today. The server was built on a shoestring budget and with very little time for testing and implementation. Jim often monitored the health of the database, but in more of a reactionary mode, due to user complaints of slowness or failed transactions. Deadlocks abounded and the backup files were never validated. One day, an announcement was made that revealed that the business had hit financially hard times. Budgets were being cut, limitations on spending were implemented and a reduction in full-time staff was required. Since having two DBAs was regarded by many as a luxury, this meant that either Sam or Jim was about to find himself out of a job. Sam and Jim's boss, Frank, was faced with a very tough decision. Sam's performance was flawless. His techniques and practices were perfection. The databases he managed were reliable and efficient. His solutions are "by the book". When given a task it is certain that, while it may take a little longer, it will be done right the first time. Jim's techniques and practices were not perfect, but effective and responsive. He makes mistakes regularly, but he shows that he learns from them, and they often result in innovative solutions. When given a task it is certain that, while the results may require some tweaking, it will be done on time and under budget. You are Frank's best friend. He approaches you and presents this scenario. He must lay off one of his valued DBAs the very next morning. Frank asks you: "All else being equal, who would you let go, and why?" Another pertinent question is raised: "Regardless of good times or bad, if you had to choose, which DBA would you want on your team when tough challenges arise?" Your response is... (This is where you enter a comment below.)

  • The Steve Jobs Chronicles – Charlie and the Apple Factory [Video]

    - by Asian Angel
    Charlie and four other lucky children found the five golden tickets that Apple CEO Steve Jobs placed in random iPhone boxes. These tickets let the children have a once in a lifetime opportunity to explore the mysteries of the Apple Factory, but will they find out the true secrets of Apple’s success? Wait!! What is Bill Gates doing sneaking around the Apple Factory?! Charlie and the Apple Factory [via Geeks are Sexy]

  • Re-running SSRS subscription jobs that have failed

    - by Rob Farley
    Sometimes, an SSRS subscription fails for some reason. It can be annoying, particularly as the appropriate response can be hard to see immediately. There may be a long list of jobs that failed one morning if a mail server is down, and trying to work out a way of running each one again can be painful. It's almost an argument for using shared schedules a lot, but the problem with this is that there are bound to be other things on that shared schedule that you wouldn't want to be re-run. Luckily, there's a table in the ReportServer database called dbo.Subscriptions, which is where the LastStatus of the subscription is stored. Having found the subscriptions that you're interested in, finding the SQL Agent jobs that correspond to them can be frustrating. Luckily, the job step command contains the subscriptionid, so it's possible to look them up based on that. And of course, once the jobs have been found, they can be executed easily enough. In this example, I produce a list of the commands to run the jobs. I can copy the results out and execute them.

      select 'exec sp_start_job @job_name = ''' + cast(j.name as varchar(40)) + ''''
      from msdb.dbo.sysjobs j
      join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
      join [ReportServer].[dbo].[Subscriptions] s
        on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
      where s.LastStatus like 'Failure sending mail%';

    Another option could be to return the job step commands directly (js.command in this query), but my preference is to run the job that contains the step.
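    For completeness, a sketch of that variant; only the select list changes:

      select js.command
      from msdb.dbo.sysjobs j
      join msdb.dbo.sysjobsteps js on js.job_id = j.job_id
      join [ReportServer].[dbo].[Subscriptions] s
        on js.command like '%' + cast(s.subscriptionid as varchar(40)) + '%'
      where s.LastStatus like 'Failure sending mail%';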

  • PowerPoint makes computer shut down on Dell OptiPlex 760

    - by yael
    Hello, at my office we upgraded a group of computers to Windows 7 + Office 2010. A few of us have a problem that when we work in PowerPoint, once in a while the computer suddenly shuts down (without any message). Some of us have no such problems. We checked and found that the people who experience problems use Dell OptiPlex 760 PCs, and everyone who has no problems uses other models. We also found out that the processors of the 760s are not all the same - some are Intel E7400 and one is Intel E8400 - so I suspect that maybe the motherboard is the problem. Does anyone know this problem? Does anyone have an idea about it? Any help will be appreciated. Thanks, Yael
