Search Results

Search found 22653 results on 907 pages for 'case insensitive'.


  • How can I configure a Linksys EA4500 + USB printer for network printing (without Connect Cloud)?

    - by Larry Kyrala
    The documentation and classic firmware (2.0.37) for Cisco's Linksys EA4500 are a bit sparse on setup details. They say I can connect a USB printer, but then go on to try to sell the "Connect Cloud" remote-management software. I don't want that; I just want to know how to set this up with the existing advanced firmware. Is it possible? AFAIK, to set up an IPP or LPD printer there is usually some kind of queue configuration on the server (i.e. the EA4500 in this case), but I can't find it in the firmware. I have also been unable to find the printer via any of the usual protocols from Win7 or Mac OS X (Windows network share, IPP/LPD, etc.). I'm curious whether I need to have the "Storage" accounts active and connect to my router via either the local IP or the router name. There are a lot of unknowns here; it would help to know how this particular router actually works.

    Read the article

  • Is it possible to redirect/bounce TCP traffic to external destination, base on rules?

    - by xfx
    I'm not even sure if this is possible... Also, please forgive my ignorance on the subject. What I'm looking for is "something" that would allow me to redirect all TCP traffic arriving at host A to host B, based on some rules. Say host A (the intermediary) receives a request (say a simple HTTP request) from a host with domain X. In that case, it lets it pass through, and it's handled by host A itself. Now, let's suppose that host A receives another HTTP request from a host with domain Y, but this time, due to some customizable rules, host A redirects all the traffic to host B, and host B is able to handle it as if it came directly from domain Y. And, at this point, both host B and the host with domain Y are able to freely communicate (of course, through host A). NOTE: All these hosts are on the Internet, not inside a LAN. Please let me know if the explanation is not clear enough.
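    If host A runs Linux, one common way to sketch this is with iptables NAT rules keyed on the source address. This is a minimal, illustrative sketch only: 203.0.113.7 stands in for an address resolved from domain Y, and 198.51.100.9 for host B (both made-up documentation addresses). Note that with MASQUERADE host B sees host A as the source; making traffic appear to come directly from domain Y would require host B to route its replies back via host A (e.g. host A as its gateway), in which case the source-rewrite rule can be dropped:

        # On host A: send TCP port 80 traffic from the "domain Y" address to host B
        iptables -t nat -A PREROUTING -s 203.0.113.7 -p tcp --dport 80 \
            -j DNAT --to-destination 198.51.100.9
        # Rewrite the source so host B's replies come back through host A
        iptables -t nat -A POSTROUTING -d 198.51.100.9 -p tcp --dport 80 -j MASQUERADE
        # Host A must forward packets for any of this to work
        sysctl -w net.ipv4.ip_forward=1

    Also note that iptables matches addresses, not domains, so "rules based on domains" means resolving each domain to its IPs first.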

    Read the article

  • SQL Azure Security: DoS

    - by Herve Roggero
    Since I decided to understand in more depth how SQL Azure works, I started to dig into its performance characteristics. So I decided to write an application that allows me to put SQL Azure to the test and compare results with a local SQL Server database. One of the options I added is the ability to issue the same command on multiple threads to get certain performance metrics. That's when I stumbled on an interesting security feature of SQL Azure: its Denial of Service (DoS) detection engine. What this security feature does is perform a check on the number of connections being established, and if the rate of connections is too high, SQL Azure blocks all communication from that machine. I am still trying to learn more about this specific feature, but it appears that going to the SQL Azure portal and testing the connection from the portal "resets" the feature and you are allowed to connect again... until you reach the login threshold. In the specific test I was performing, all the logins were successful. I haven't tried to log in with an invalid account or password... that will be for next time. On my LinkedIn group (SQL Server and SQL Azure Security: http://www.linkedin.com/groups?gid=2569994&trk=hb_side_g) Chip Andrews (www.sqlsecurity.com) pointed out that this feature in itself could present an internal threat. In theory, a rogue application could issue many login requests from a NATed network, which could potentially prevent any production system on the same network from connecting to SQL Azure. My initial response was that this could indeed be the case. However, while the TCP protocol carries only the latest NATed IP address of a machine (which masks the origin of the machine making the SQL request), the TDS protocol itself contains the IP address of the machine making the initial request; so technically there would be a way for SQL Azure to block only the internal IP address making the rogue requests. So this warrants further investigation... stay tuned...
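    For reference, a minimal C# sketch of the kind of multi-threaded connection test described above, with a crude throttle to pace connection attempts. The pacing interval and connection string are assumptions for illustration, not documented SQL Azure limits:

        using System;
        using System.Data.SqlClient;
        using System.Threading;
        using System.Threading.Tasks;

        class ConnectionRateTest
        {
            // Hypothetical pacing: at most one new connection per 100 ms per worker.
            static readonly TimeSpan Pace = TimeSpan.FromMilliseconds(100);

            static void Main()
            {
                const string connStr =
                    "Server=tcp:yourserver.database.windows.net;Database=testdb;" +
                    "User ID=user@yourserver;Password=...;Encrypt=True;"; // placeholder

                Parallel.For(0, 10, worker =>
                {
                    for (int i = 0; i < 50; i++)
                    {
                        try
                        {
                            using (var conn = new SqlConnection(connStr))
                            using (var cmd = new SqlCommand("SELECT 1", conn))
                            {
                                conn.Open();
                                cmd.ExecuteScalar();
                            }
                        }
                        catch (SqlException ex)
                        {
                            // A sudden burst of failures here may mean the DoS engine kicked in.
                            Console.WriteLine("Worker {0}: {1}", worker, ex.Message);
                        }
                        Thread.Sleep(Pace); // throttle the connection rate
                    }
                });
            }
        }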

    Read the article

  • Best wireless mouse/keyboard for conference room computer?

    - by Brett
    What would be the best wireless mouse and keyboard for a conference room computer that is used by multiple employees throughout the day? We have one in there right now that is really cheap that doesn't work half the time. This is due to the fact that the batteries run down when left on... and it seems to have problems losing its pairing with the computer dongle. Any ideas on something that won't have battery problems, is very reliable and somewhat tough? Price isn't too much of an issue, but I'd still prefer to get something for less than $100 just in case someone walks away with it.

    Read the article

  • sysprep failure on Windows Server 2008

    - by dushyantp
    Before deploying an Azure VM Role, we need to run %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown. But in my case sysprep fails, with the log file %windir%\system32\sysprep\Panther\setuperr.txt saying:
        2012-07-05 08:03:57, Error [0x0f0073] SYSPRP RunExternalDlls:Not running DLLs; either the machine is in an invalid state or we couldn't update the recorded state, dwRet = 31
        2012-07-05 08:03:57, Error [0x0f00ae] SYSPRP WinMain:Hit failure while processing sysprep cleanup external providers; hr = 0x8007001f
    I do not always want to create a new image. Is there any workaround? I followed the instructions in MS support here and tried:
        %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:.\unattend.xml
    It did not work. Under certain circumstances I need to tear down the VM image from Azure and re-deploy with some more changes, so sysprep has to run almost twice every week.
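    A commonly cited workaround for the "RunExternalDlls: Not running DLLs" failure is to reset the generalization state that sysprep records and re-register MSDTC before re-running it. Treat this as an illustrative sketch and verify it against Microsoft's guidance for your build:

        rem Reset the state sysprep recorded from the previous generalize pass
        reg add HKLM\SYSTEM\Setup\Status\SysprepStatus /v GeneralizationState /t REG_DWORD /d 7 /f
        rem Re-register MSDTC, a frequent cause of the cleanup-provider failure
        msdtc -uninstall
        msdtc -install
        rem Reboot, then re-run sysprep /generalize /oobe /shutdown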

    Read the article

  • Apache + SuExec + php-fpm - how to set them up?

    - by FractalizeR
    Hello. I wonder if there is a good guide on how to set up Apache + SuExec + php-fpm? I have a server on which I am going to host several separate websites, so I need PHP to run as the site-owner user. As I can see, php-fpm is a little different from php-fcgi. Is there a need for mod_fcgid from Apache in this case? How do I set this all up? For now my site is running Apache + mod_suphp + php-cgi, so... it's good, but a little slow. I want to preserve security and gain the ability to use APC.
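    A common shape for this setup, sketched under the assumption of Apache 2.4.10+ with mod_proxy_fcgi loaded (all names and paths below are illustrative):

        ; /etc/php-fpm.d/site1.conf -- one pool per site
        [site1]
        user = site1owner              ; workers run as the site owner
        group = site1owner
        listen = /var/run/php-fpm/site1.sock
        listen.owner = apache          ; let the web server connect to the socket
        listen.group = apache
        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

        # Apache vhost for the same site
        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot /var/www/site1
            # Hand .php requests to this site's own php-fpm pool
            <FilesMatch "\.php$">
                SetHandler "proxy:unix:/var/run/php-fpm/site1.sock|fcgi://localhost"
            </FilesMatch>
        </VirtualHost>

    With one pool per site, PHP already runs as each site's owner, so suEXEC and mod_fcgid drop out of the picture; and because the php-fpm workers persist between requests, an opcode cache like APC keeps working, unlike with classic per-request CGI.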

    Read the article

  • Why a static main method in Java and C#, rather than a constructor?

    - by Konrad Rudolph
    Why did (notably) Java and C# decide to have a static method as their entry point – rather than representing an application instance by an instance of an Application class, with the entry point being an appropriate constructor – which, at least to me, seems more natural? I’m interested in a definitive answer from a primary or secondary source, not mere speculations. This has been asked before. Unfortunately, the existing answers are merely begging the question. In particular, the following answers don’t satisfy me, as I deem them incorrect:
        - "There would be ambiguity if the constructor were overloaded." – In fact, C# (as well as C and C++) allows different signatures for Main, so the same potential ambiguity exists, and is dealt with.
        - "A static method means no objects can be instantiated before, so the order of initialisation is clear." – This is just factually wrong; some objects are instantiated before (e.g. in a static constructor).
        - "So they can be invoked by the runtime without having to instantiate a parent object." – This is no answer at all.
    Just to justify further why I think this is a valid and interesting question: many frameworks do use classes to represent applications, and constructors as entry points. For instance, the VB.NET application framework uses a dedicated main dialog (and its constructor) as the entry point[1]. Neither Java nor C# technically needs a main method. Well, C# needs one to compile, but Java not even that. And in neither case is it needed for execution. So this doesn’t appear to be a technical restriction. And, as I mentioned in the first paragraph, for a mere convention it seems oddly unfitting with the general design principle of Java and C#. To be clear, there isn’t a specific disadvantage to having a static main method; it’s just distinctly odd, which made me wonder if there was some technical rationale behind it. I’m interested in a definitive answer from a primary or secondary source, not mere speculations.
    [1] Although there is a callback (Startup) which may intercept this.
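    To make the point about Main's signature flexibility in C# concrete, here is a brief sketch; all four of the listed forms are accepted entry-point signatures, though only one may serve as the entry point of a given program:

        using System;

        class Program
        {
            // Any one of these four signatures can be the C# entry point:
            //   static void Main()
            //   static void Main(string[] args)
            //   static int Main()
            //   static int Main(string[] args)
            static int Main(string[] args)
            {
                Console.WriteLine("Started with {0} argument(s).", args.Length);
                return 0; // the int form reports an exit code to the runtime
            }
        }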

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is:
        1. the bitmap images that were used in Flash, like the head and body
        2. the links between the images, like where the head connects to the body
        3. the motion data from the flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever
    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the movieclips. So does anyone know of a library I can use, or how I can learn more about the movieclip components and whatnot? (more better tags: transform, export, convert)
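    As a first step toward poking at the container format, here is a small C# sketch that reads the fixed 8-byte SWF header (signature, version, uncompressed length). The tag stream that follows the header, where records like DefineBits, DefineSprite and PlaceObject2 carry the images, hierarchy and motion data, is where the real parsing work would begin; this only shows the opening move:

        using System;
        using System.IO;

        class SwfHeader
        {
            static void Main(string[] args)
            {
                using (var r = new BinaryReader(File.OpenRead(args[0])))
                {
                    // Bytes 0-2: "FWS" (uncompressed), "CWS" (zlib) or "ZWS" (LZMA)
                    string signature = new string(r.ReadChars(3));
                    byte version = r.ReadByte();  // byte 3: SWF version
                    uint length = r.ReadUInt32(); // bytes 4-7: uncompressed size, little-endian

                    Console.WriteLine("Signature: {0}", signature);
                    Console.WriteLine("Version:   {0}", version);
                    Console.WriteLine("Length:    {0} bytes (uncompressed)", length);

                    if (signature != "FWS")
                        Console.WriteLine("Body is compressed; inflate it before parsing tags.");
                }
            }
        }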

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – prior to SSDT you would need to define a DEFAULT constraint, however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:
        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
                CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];
    You can then update the value as appropriate in a Post-Deployment script (sketched below). Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
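    In case it helps, a Post-Deployment script for the example above might look something like this; the WHERE clause assumes the arbitrary 0 is what needs overwriting, and the backfill logic is yours to define:

        -- Post-Deployment script: replace the smart default with real values
        UPDATE [dbo].[T1]
        SET    [C1] = 42   -- whatever the correct backfilled value is
        WHERE  [C1] = 0;   -- rows still carrying the smart default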

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx
    One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages, adding the web parts to the site pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss, and provide a solution for, is a package that has:
        1. Master Page
        2. Page Layout
        3. Page Web Parts
        4. Site Pages
    almost all done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site! I am not implying that in development we do this; in fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is that we take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and VOILA, their site appears! I had a project that had me build 8 pages like this as part of the solution. In this blog post, I am taking a master page solution that I have called DJGreenMaster. On my Office 365 development site, it is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development! It is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code. Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. It is a complete solution to add a master page to a site collection, containing:
        1. A Master Page module which contains the Master Page and Page Layout
        2. The Footer module to add the Footer Web Part
        3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
        4. Three features and two feature event receivers:
            a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
            b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver, it changes the master page to DJGreenMaster, creates the footer list and checks the files in
            c. DJGreenMasterWebParts, which adds the Footer Web Part to the site collection
    I won't go over the code for this as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post. So what we have is the basis to begin what is germane to this discussion: I have the first two requirements completed and now need to add the page web parts and build the pages in code. For the page web parts, I will use one downloaded from CodePlex, which for simplicity does not use a SharePoint custom list (the Weather Web Part), and another downloaded from MSDN, a SharePoint custom calendar web part, to which I added some jQuery so that color-coded events can exceed the built-in 10 overlays!
    Here is the solution with the added projects, along with a screen shot of the Weather Web Part deployed and a screen shot of the Site Calendar with jQuery. Okay, now we get to the final item: creating the publishing pages. We need to add a feature to the DJGreenMaster project (I will name it DJSitePages) and also add an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. Add a reference to Microsoft.SharePoint.Publishing.dll, contained in the ISAPI folder of the 14 hive. First we will add some static methods which we will call in our event receiver:

        private static void checkOut(string pagename, PublishingPage p)
        {
            if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
            {
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
                {
                    p.CheckOut();
                }
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
                {
                    p.CheckIn("initial");
                    p.CheckOut();
                }
            }
        }

        private static void checkin(PublishingPage p, PublishingWeb pw)
        {
            SPFile publishFile = p.ListItem.File;
            if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
            {
                publishFile.CheckIn("CheckedIn");
                publishFile.Publish("published");
            }
            // In case of content approval, approve the file (publishing site)
            if (pw.PagesList.EnableModeration)
            {
                publishFile.Approve("Initial");
            }
            publishFile.Update();
        }

    In a publishing site, CheckIn and CheckOut are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            object oParent = properties.Feature.Parent;

            // currentWeb and currentSite are fields on the receiver class
            if (properties.Feature.Parent is SPWeb)
            {
                currentWeb = (SPWeb)oParent;
                currentSite = currentWeb.Site;
            }
            else
            {
                currentSite = (SPSite)oParent;
                currentWeb = currentSite.RootWeb;
            }

            // create the publishing pages
            CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
            //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
        }

    Basically we are calling the method CreatePublishingPage with parameters: the current web, the name of the page, the page layout, and the title of the page.
    Let’s look at the CreatePublishingPage method:

        private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
        {
            PublishingSite pubSiteCollection = new PublishingSite(site.Site);
            PublishingWeb pubSite = null;
            if (pubSiteCollection != null)
            {
                // Assign an object to the pubSite variable
                if (PublishingWeb.IsPublishingWeb(site))
                {
                    pubSite = PublishingWeb.GetPublishingWeb(site);
                }
            }
            // Search for the page layout for creating the new page
            PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
            // Check whether the page layout could be found in the collection;
            // if not (== null), return, because the page has to be based on
            // an existing page layout
            if (currentPageLayout == null)
            {
                return;
            }

            PublishingPageCollection pages = pubSite.GetPublishingPages();
            foreach (PublishingPage p in pages)
            {
                // The page already exists
                if ((p.Name == pageName)) return;
            }

            PublishingPage newPage = pages.Add(pageName, currentPageLayout);
            newPage.Description = pageName.Replace(".aspx", "");
            // Here you can set some properties like:
            newPage.IncludeInCurrentNavigation = true;
            newPage.IncludeInGlobalNavigation = true;
            newPage.Title = title;

            // build the page
            switch (pageName)
            {
                case "Home.aspx":
                    checkOut("Home.aspx", newPage);
                    BuildHomePage(site, newPage);
                    break;
                default:
                    break;
            }
            // newPage.Update();
            // Now we can check in the newly created page to the Pages library
            checkin(newPage, pubSite);
        }

    The narrative of what is going on here is:
        1. We find out whether we are dealing with a publishing web.
        2. We get the page layout.
        3. We create the page in the Pages list.
        4. Based on the page name, we build that page. (Here is where we can add all the methods to build multiple pages.)
    In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts we need to add references to the two web part projects in the solution:

        using WeatherWebPart.WeatherWebPart;
        using CSSharePointCustomCalendar.CustomCalendarWebPart;

    We can then reference them in the BuildHomePage method.
    Let’s look at BuildHomePage:

        private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
        {
            // Build the page. Get the web part manager for each page and repeat the
            // code below, changing to the web parts for that page.
            SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
                web.Url + "/Pages/Home.aspx",
                System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

            WeatherWebPart.WeatherWebPart.WeatherWebPart wwp =
                new WeatherWebPart.WeatherWebPart.WeatherWebPart()
                { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };
            //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
            //wwpDic.Add("AreaCode", "2504627");
            //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

            // Add the web part to a page layout web part zone
            mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

            CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp =
                new CustomCalendarWebPart()
                { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };
            mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

            pubPage.Update();
        }

    Here is what we are doing:
        1. We get a reference to the SharePoint limited web part manager, pointed at Home.aspx.
        2. We instantiate a new Weather Web Part and use the manager to add it to the page, in a web part zone identified by ID – thus the need for a page layout whose zone IDs the developer knows.
        3. We instantiate the Calendar Web Part and use the manager to add it to the page.
        4. We call the publishing page's Update method.
        5. Lastly, the CreatePublishingPage method checks in the page just created.
    Here is a screen shot of the page right after a deploy! Okay, I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built! So what am I saying? Build your web parts, master pages, etc., and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution: Code

    Read the article

  • Can't boot Ubuntu 12.04 from external Hard Drive using Mac

    - by Catgirl the Crazy
    Recently, I upgraded the RAM and hard drive on my Early 2008 MacBook to improve its performance. Rather than throw away the old hard drive, I bought an enclosure for it to turn it into an external hard drive, and, since all the data was migrated to my new drive, I decided to install Ubuntu on it for funsies (note: I am a near-total Ubuntu n00b). My first attempt to install Ubuntu didn't work (it gave me errors about not being able to find the BIOS or something), but my second attempt finished successfully (I can't remember what, if anything, I did differently). However, when I plug the external drive into my MacBook, it gives me a message saying it can't read the disk. Moreover, when I go into the Startup Manager (i.e. what you get when you turn on the MacBook while holding the option key), the external drive is not one of the available startup disks. I thought this might be because I have an older MacBook, so I tried booting it with my mom's Late 2011 MacBook, and got the same results. Then I tried booting it through my dad's Dell laptop that runs Windows 7, and that time it worked. This is really counterintuitive to me, since the hard drive originally came from a MacBook, so if anything you'd think it would be less compatible with the Windows laptop than with a MacBook. In case it helps, here's a link to a picture of how I set up the partition table while doing the install (not shown there is the fact that I checked the "Format?" box next to the /boot partition, since it gave me a warning when I tried to continue the installation without doing so). Does anyone have any clue at all? If it helps, the hard drive I'm using is a 120GB 5400-rpm Serial ATA hard disk drive.

    Read the article

  • How to get a Sun Ray to load a firmware from elsewhere

    - by vdiozguy
    I run a Sun Ray/VDI demo environment internally within the company – and because it's not a public service, I need to tell my Sun Rays to connect to it directly so that I don't get redirected to the corporate servers. To get any new Sun Ray to connect to *my* setup I usually pull out my laptop so that the Sun Ray can load the new version of the F/W, along with the permission to pull up the management GUI via STOP-S. But there is a better way if you have another Sun Ray server handy:
        1) allow your Sun Ray to connect to the default corporate server
        2) log in to a "regular" session, that is, a Solaris or Linux desktop on the Sun Ray server itself
        3) in a terminal, utswitch to your server (/opt/SUNWut/bin/utswitch -h myserver)
        4) again, log in to a regular session there
        5) in a terminal, issue "/opt/SUNWut/lib/utload -S myserver -w"
        6) watch your firmware load and wait
        7) the Sun Ray will reboot and connect to the first server again; repeat steps 2-4
        8) issue "/opt/SUNWut/lib/utload -S myserver -f SunRay.enableGUI"
        9) press STOP-S and be merry
    NOTE: I'm sure there is an even better way – this is totally unsupported, most likely a figment of my imagination. In any case, this post will self-destruct in BOOM.

    Read the article

  • Which Message Queue should I choose (must run on Linux)

    - by MHS
    There are many open source message queues for Linux, and I need some help deciding which I should go for. My problem is simple – I get sent a list of files that need to be processed. Each job can't be split up, but the jobs are self-contained and can be spread across multiple computers. I'm thinking of solving this using a message queue: multiple clients send a message to a central queue, and each queue has a number of subscribers that will take jobs from that queue when they have finished processing the current job. Ideally it should have the following qualities (a sketch of how these map onto one candidate follows below):
        1. The message queue must be able to store unprocessed messages in case of a shutdown/reboot.
        2. A job can only be processed by a single subscriber (I don't want duplicate jobs).
        3. The subscribers should be able to send jobs of their own, to be processed by a different set of subscribers.
    Can anyone suggest a simple-to-use message queue?
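    Not an endorsement of any particular broker, but as an illustration of how those three requirements map onto broker features, here is a minimal worker sketch using RabbitMQ's .NET client: a durable queue plus persistent messages cover #1, manual acks with prefetch=1 cover #2, and #3 is just publishing to a second queue. Queue names are made up:

        using System;
        using System.Linq;
        using System.Text;
        using RabbitMQ.Client;
        using RabbitMQ.Client.Events;

        class Worker
        {
            static void Main()
            {
                var factory = new ConnectionFactory { HostName = "localhost" };
                using (var connection = factory.CreateConnection())
                using (var channel = connection.CreateModel())
                {
                    // #1: a durable queue survives a broker restart
                    channel.QueueDeclare("jobs", durable: true, exclusive: false,
                                         autoDelete: false, arguments: null);
                    // #2: deliver only one unacknowledged job at a time to this subscriber
                    channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

                    var consumer = new EventingBasicConsumer(channel);
                    consumer.Received += (sender, ea) =>
                    {
                        // ea.Body is byte[] in older clients, ReadOnlyMemory<byte> in 6.x;
                        // ToArray() works for both
                        string file = Encoding.UTF8.GetString(ea.Body.ToArray());
                        Console.WriteLine("Processing {0}", file);
                        // ... process the file, then acknowledge so it is not redelivered
                        channel.BasicAck(ea.DeliveryTag, multiple: false);
                        // #3: this worker may itself publish follow-up jobs to another queue
                    };
                    channel.BasicConsume("jobs", autoAck: false, consumer: consumer);
                    Console.ReadLine(); // keep the worker alive
                }
            }
        }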

    Read the article

  • How can I retrieve the details of the file from an outbound operation in BPEL 11g

    - by [email protected]
    Several times we come across requirements where we need to capture the details of the file that got written out as part of a BPEL process invoking a File/FTP Adapter. Consider a case where we're using a FileNamingConvention of "PurchaseOrder_%SEQ%.txt" and we need to do some post-processing based on the filename (please remember that we wouldn't know the filename until the adapter invocation completes). In order to achieve this, we need to manually tweak the WSDL so that the File/FTP Adapter can return the metadata of the file that was written out. In general, the File/FTP Write/Put WSDL operations are one-way. The File/FTP Adapters are designed to return the metadata back if this WSDL is tweaked into a two-way WSDL. In addition, the <wsdl:output/> must import the fileread.xsd schema; you will need to copy fileread.xsd from here into the xsd folder of your composite. Finally, we tweak the WSDL itself (sketched below), and the BPEL <invoke> then returns the file metadata as part of the BPEL output variable.
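    For illustration only, the tweaked operation might end up shaped roughly like this. The message and element names below are guesses, not the exact ones your generated WSDL will contain; the key points are simply the added <wsdl:output/> and a response message typed from fileread.xsd:

        <!-- illustrative sketch; names will differ in your generated adapter WSDL -->
        <wsdl:portType name="Write_ptt">
          <wsdl:operation name="Write">
            <wsdl:input  message="tns:Write_msg"/>
            <!-- the added output turns the one-way operation into a request-response one -->
            <wsdl:output message="tns:WriteResponse_msg"/>
          </wsdl:operation>
        </wsdl:portType>

        <!-- the response message must be typed from fileread.xsd, e.g.: -->
        <wsdl:message name="WriteResponse_msg">
          <wsdl:part name="body" element="ns1:OutboundFileHeaderType"/>
        </wsdl:message>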

    Read the article

  • eSTEP TechCast - November 2013

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 7th of November, and would be happy if you could join. Please see below the details for the next TechCast.
    Date and time: Thursday, 07. November 2013, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST)
    Title: The Operational Management benefits of Engineered Systems
    Abstract: Oracle Engineered Systems require significantly less administration effort than traditional platforms. This presentation will explain why this is the case, how much can be saved, and discuss the best practices recommended to maximise Engineered Systems operational efficiency.
    Target audience: Tech Presales
    Speaker: Julian Lane
    Call Info:
    Call-in toll-free number: 08006948154 (United Kingdom)
    Call-in toll-free number: +44-2081181001 (United Kingdom)
    Conference Code: 803 594 3
    Security Passcode: 9876
    Webex Info (Oracle Web Conference):
    Meeting Number: 599 156 244
    Meeting Password: tech2011
    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from already-delivered eSTEP TechCasts. Use your email address and PIN eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Combo/Input LOV displaying non-reference key value

    - by [email protected]
    It's a very common LOV use case that we want to display a non-key value in the LOV but store the key value in the DB. I had to do the same in a sample application I was building. While implementing this, I realized that there are multiple ways to achieve it; I am going to describe each of them below.
    Example: Let's take our classic HR schema. I have two tables, Employee and Department, where Dno is the foreign-key attribute in Employee that references the Department table. I want to create an LOV for Department such that the list always displays Dname instead of Dno; however, when I update it, it should update the reference key Dno. To achieve this I had three alternatives.
    1) Approach 1: Create a composite VO and add the attributes from Department into Employee using a join. (See the blog http://andrejusb.blogspot.com/2009/11/defining-lov-on-reference-attribute-in.html.)
    Positives:
        1. Easy to implement and use.
        2. We can use this attribute directly in queries defined on the new attribute, i.e. if I have to display it inside a query panel.
    Negative: We have to create an additional join on the VO. Ex:
        SELECT Employees.EMPLOYEE_ID,
               Employees.FIRST_NAME,
               Employees.LAST_NAME,
               Employees.EMAIL,
               Employees.PHONE_NUMBER,
               Department.Dno,
               Department.Dname
        FROM EMPLOYEES Employees, Department Department
        WHERE Employees.Dno = Department.Dno
    2) Approach 2:

    Read the article

  • What are the consequences of immutable classes with references to mutable classes?

    - by glenviewjeff
    I've recently begun adopting the best practice of designing my classes to be immutable, per Effective Java [Bloch2008]. I have a series of interrelated questions about degrees of mutability and their consequences. I have run into situations where a (Java) class I implemented is only "internally immutable" because it uses references to other, mutable classes. In this case, the class under development appears from the external environment to have state (a small sketch of this situation follows below).
        1. Do any of the benefits (see below) of immutable classes hold true even for merely "internally immutable" classes?
        2. Is there an accepted term for the aforementioned "internal mutability"? Wikipedia's immutable object page uses the unsourced term "deep immutability" to describe an object whose references are also immutable.
        3. Is the distinction between mutability and side-effect-ness/state important?
    Josh Bloch lists the following benefits of immutable classes; they:
        - are simple to construct, test, and use
        - are automatically thread-safe and have no synchronization issues
        - do not need a copy constructor
        - do not need an implementation of clone
        - allow hashCode to use lazy initialization, and to cache its return value
        - do not need to be copied defensively when used as a field
        - make good Map keys and Set elements (these objects must not change state while in the collection)
        - have their class invariant established once upon construction, and it never needs to be checked again
        - always have "failure atomicity" (a term used by Joshua Bloch): if an immutable object throws an exception, it's never left in an undesirable or indeterminate state
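    For concreteness, a minimal Java sketch of the "internally immutable" situation described above: the wrapper never reassigns a field (all are final), yet its observable state can still change through the referenced mutable object.

        import java.util.List;

        // "Internally immutable": no method reassigns any field,
        // but the object's observable state can still change.
        public final class Route {
            private final List<String> stops; // final reference to a mutable List

            public Route(List<String> stops) {
                this.stops = stops; // no defensive copy: the caller keeps a live handle
            }

            public List<String> getStops() {
                return stops; // the mutable list escapes; callers can change our state
            }
            // Deep immutability would copy in the constructor (new ArrayList<>(stops))
            // and return Collections.unmodifiableList(stops) here instead.
        }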

    Read the article

  • Remote-deploying WARs to a Liferay installation

    - by iftrue
    With vanilla Tomcat, you can POST to URLs beneath SOMEURL/manager/ with a proper manager user role defined. The Liferay deployment of Tomcat, however, is missing the manager and host-manager applications, and when I copy the directories from a vanilla Tomcat installation, I get the exception below:
    Exception:
        javax.servlet.ServletException: Error allocating a servlet instance
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)
    root cause:
        java.lang.SecurityException: Servlet of class org.apache.catalina.manager.HTMLManagerServlet is privileged and cannot be loaded by this web application
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)
    What's the proper way to remote-deploy WARs to a Liferay instance? (Not portlets, in my case.)

    Read the article

  • pam_exec.so PAM module does not export variable PAM_USER as stated in the documentation

    - by davidparks21
    I'm trying to use the pam_exec.so PAM module to execute a script which needs to know the username/password coming from the application (OpenVPN in this case). I have a script that executes printenv >> afile, but I don't see all the environment variables that the man page states pam_exec.so exports (namely PAM_USER, I think); I only see the following:
        PAM_SERVICE=openvpn
        PAM_TYPE=auth
        PWD=/usr/local/openvpn/bin
        SHLVL=1
        A__z="*SHLVL
    I do successfully pick up the password off of STDIN and output it with this same script, but for the life of me I can't get the username. Any thoughts on what I should try next?
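    For comparison, a minimal PAM stanza of the kind usually paired with such a script (the script path and log file are illustrative); expose_authtok is what feeds the password on stdin, and per the pam_exec man page PAM_USER should be exported alongside PAM_SERVICE and PAM_TYPE, so if it is missing it is worth double-checking which PAM service file OpenVPN is actually configured to use:

        # /etc/pam.d/openvpn  (illustrative)
        # expose_authtok passes the authentication token (password) to the
        # script on stdin; pam_exec exports PAM_USER, PAM_SERVICE, PAM_TYPE, etc.
        auth required pam_exec.so expose_authtok log=/var/log/openvpn-auth.log /usr/local/sbin/ovpn-auth.sh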

    Read the article

  • Multiple CAS servers with Microsoft Exchange and selective authorization

    - by John Wilcox
    I have a Microsoft Exchange 2010 organization within one Microsoft Windows domain, and I have users accessing it through OWA. For simplicity, let's say I currently have one CAS server (CAS 1) which is accessible only through a VPN connection. Let's call the users connecting to the first CAS "group A". For some users, though, I need to install another CAS server (CAS 2) so that they can connect without using a VPN connection. Let's call those users "group B". What I need to achieve is that group A can only log in to CAS 1 and group B can only log in to CAS 2. Now, I know that one can disable/enable OWA per user, but in my case that is not enough, because OWA must be enabled for both groups.

    Read the article

  • Issue with multiple student result printing requests

    - by dotman14
    I'm not really sure how to ask this question, but I'll try my best to make it clear. I have a student result application in which students' results are managed over several academic sessions, each having two semesters. During each semester, students take different courses and have results per semester. The application is done now, and I'm using a PDF library to crop the final result page to hand over to the students each semester. If a student requests a particular semester's result, it's a straightforward issue and there are no complications when it comes to printing out the result. My issue is this: what about a case where a student requests a combination of semesters... say 3rd year rain semester, 4th year rain semester and 5th year harmattan semester? How can I handle this? Does the user pick these options at the user-interface level, or is there a special way to handle issues like this? Also, if I'm to display these multiple student results, how could this be done, knowing full well that I'll have to print the different results separately? Hopefully I've been able to make my situation clear enough. Thanks for your time and patience. Expecting your comments and answers. Thanks.

    Read the article

  • What would interfere with changing the default application for opening files on OS X?

    - by Michael Prescott
    I know how to change the default application for opening files: I select a file in Finder, right-click and select Get Info from the context menu. In the file's Info window, I expand the Open with: panel. In that panel is a combo box that says, in my case, Adobe Flash CS4. I click the combo box and select Flash Player. It changes. Then I click the Change All... button. A dialog pops up and says: Are you sure you want to change all similar documents to open with the application “Adobe Flash CS4”? This change will apply to all documents with extension “.swf”. Cancel/Continue. Well, clicking Continue does exactly what the message says: it sets the Finder to open all .swf files with Adobe Flash CS4. What is going on? Why doesn't the message say "Flash Player"?

    Read the article

  • Change Gnome panel profile according to number of displays

    - by ifischer
    I'm running Ubuntu 10.04 on a laptop. I have a startup script which enables external displays if they are connected; it runs at GDM startup, configured in /etc/gdm/Init/Default. When I'm running without external displays, Gnome should use 2 panels. When I'm using 2 external displays, Gnome should add an additional panel to the second display, but this should of course be removed again if I detach the external displays (and restart). Can I handle this use case by using Gnome panel profiles? I read that there is a startup option "--profile" for gnome-panel, but I don't know how and where I could switch the profile, especially because this has to be done after detecting the number of displays (a sketch of the detection half is below). Or can I add a general Gnome profile and switch between those profiles somehow to achieve this behavior?
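    The display-counting part, at least, can be scripted. Here is a minimal sketch for the GDM Init script; the profile names are made up, and I have not verified that gnome-panel on 10.04 actually honors --profile, so treat that invocation as an assumption to test:

        #!/bin/sh
        # Count connected outputs; the leading space in " connected"
        # keeps grep from also matching "disconnected"
        DISPLAYS=$(xrandr --query | grep -c " connected")

        if [ "$DISPLAYS" -ge 2 ]; then
            PROFILE=multi-monitor    # hypothetical profile with the extra panel
        else
            PROFILE=default          # hypothetical two-panel profile
        fi

        # Assumed invocation: verify gnome-panel actually honors --profile here
        gnome-panel --profile="$PROFILE" &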

    Read the article

  • Hot fix published for TFS2010 upgrade issues

    - by jehan
    Microsoft has released a hotfix for issues that were identified after migrating TFS2005/TFS2008 servers to TFS2010. The issues relate to merging and labels:
        · Labels that were created before the upgrade are entirely empty; labels could also have incorrect contents.
        · The merge wizard in Visual Studio does not display all valid merge targets for a given source path/branch.
        · During merging, merge candidates are shown for changes that were already merged prior to the upgrade.
    If you have not yet upgraded to TFS 2010, the hotfix is now available and is highly recommended to be applied before configuring your team project collections. Because this hotfix applies to the upgrade of version control content, it must be applied after TFS 2010 setup is complete, but before configuration is started. At the end of the setup experience, the Success screen is shown, indicating the completion of the installation. Normally users would continue on to the configuration part, but in this case the user needs to cancel it by un-checking the "Launch Team Foundation Server Configuration Tool" box, which will enable the Cancel button. After exiting setup, the hotfix executable can be run to update the upgrade steps. Once the hotfix is installed, the TFS Configuration Wizard will need to be re-launched from the Start Menu to complete the upgrade process. The hotfix has been published on MSDN Code Gallery – you can find it here: http://code.msdn.microsoft.com/KB2135068. If you have upgraded to TFS2010 and are facing any of the above issues, then check out this KB for the resolution: http://support.microsoft.com/kb/2193796/en-us

    Read the article

  • Can my PowerMac G3 B&W really take a hard drive larger than 128GB?

    - by Josh Calvetti
    So it's a well-known fact that PowerMacs manufactured before 2002 cannot take a hard drive larger than 128GB (their onboard ATA controllers use 28-bit LBA addressing, which tops out at 128GiB, about 137GB decimal). I have an old B&W that was running 10.4, and upon putting a 250GB drive inside, it told me that I had inserted a 128GB drive. That was expected. However, I recently decided to turn that machine into a Debian home file server. I shoved the 250GB drive inside, did some formatting, and now it tells me that it is a 250GB drive. Is this safe to use? Will all my data go corrupt after I've added more than 128GB of stuff? In case the specs are helpful to have, it's a 400MHz B&W, 1GB RAM, Rev. B.

    Read the article
