Search Results

Search found 24201 results on 969 pages for 'andrew case'.


  • One-way forest trust between geographically distributed forests using Server 2008 R2

    - by bwerks
    Hi all, I'm planning out a joinder between two domains, as would take place between contracting companies. Forests A and B exist in distant sites, and there is to be a one-way forest trust so that domain users in Forest A can be authenticated on machines in Forest B. To facilitate this, the domain controllers in each forest must be able to contact one another to set up and confirm the trust, but my question is what underlying networking must be in place beneath that. So far the prevailing approach has been to maintain a VPN connection between the two sites, but the TechNet documentation seems to indicate that DNS forwarding may be the way to go. Is this the case? Furthermore, if DNS will suffice, does that mean each domain must expose a DNS server on a boundary host so that it can be reached from across the internet? How must they be configured? Thanks!

    Read the article

  • Ganglia and how it communicates

    - by MikeKulls
    I'm a little confused about how Ganglia sends information around and have found conflicting information on the web. I would have thought that the gmond process would either send info to gmetad at a regular interval or gmetad would request info from the various instances of gmond. At least one online article states this is how it works, but from what I understand this is incorrect. It appears that you configure all gmond processes to send their info to a central gmond process, and gmetad will request info from that central gmond. Is that correct? In my case I have 6 servers sending their information to 1 central server. If I set gmetad to get its information from the central server, then I get information from all 6 servers, all good. The weird thing is that if I point gmetad at one of the 6 servers, I still get info from all 6 servers. How is it that server 1 in my cluster is getting stats from all the other servers?
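
    For anyone trying to confirm this against their own cluster: as I understand it, with the stock multicast configuration every gmond hears every other gmond's UDP announcements and keeps the whole cluster's state in memory, and gmetad simply polls one or more of those nodes over TCP (port 8649 by default) and reads an XML dump of everything they know. That would explain why pointing gmetad at any of the 6 servers still returns all 6. A minimal sketch of that poll, assuming a reachable gmond and the default port (hostname and port are placeholders):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.Socket;

        public class GmondDump {
            public static void main(String[] args) throws Exception {
                String host = args.length > 0 ? args[0] : "localhost"; // any gmond in the cluster
                int port = args.length > 1 ? Integer.parseInt(args[1]) : 8649; // default tcp_accept_channel

                // gmond dumps its whole in-memory cluster state as XML and closes the
                // connection; gmetad does essentially this on every polling interval.
                try (Socket s = new Socket(host, port);
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // Each <HOST NAME="..."> element is a node this gmond has heard about,
                        // which is why any listening node can report the whole cluster.
                        System.out.println(line);
                    }
                }
            }
        }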

    Read the article

  • Ubuntu 12.04 does not see windows already install on my computer (dual installation)

    - by jacinta
    I was trying to install Ubuntu 12.04 alongside Windows 7 on my new HP Pavilion desktop with Windows 7, but Ubuntu said "This computer has no detected operating system," and someone said "I suggest you chkdsk your Windows partition. I also suggest you resize the NTFS in Windows then install Ubuntu to the free space." So I followed the Windows instructions (to shrink a simple or spanned volume using the Windows interface: in Disk Management, right-click the simple or spanned volume you want to shrink, click Shrink Volume…, and follow the instructions on your screen). When I tried to install Ubuntu 12.04 after doing this, I received the same error. I was going to undo what I did, but I see that I lose 1 GB when I do that, so now what do I do? It says I can create a new simple volume and maybe then the space will no longer be unallocated. Please help me. I think I have a bad CD (Ubuntu 12.04), because from my research I see I am not supposed to get a screen saying "The computer has no detected operating system." I think this is a bad CD and I hope I did not mess up my computer. Please help.

    OK, I think I am following what you said about how to edit my question, irrational john. I did chkdsk as you and actionparsnip (andrew-woodhead666) told me to, and also did a lot of other things before I found out how to run chkdsk. No problems, thank you. Then I put back the space (extended) I took from SYSTEM; I was only able to put back 15 and not 16, so it is up to 99 MB rather than back to 100 MB. Then I shrank HP (C:) as you told me, by 13,240 MB, which shows as 12.93 GB unallocated. I did not change it into NTFS with the New Simple Volume action; I just left it. Then I tried to install from the Ubuntu 12.04 amd64 live CD, and it gave me the result it was sometimes giving me before: Ubuntu does not tell me whether or not I already have Windows 7 installed. It just goes to a window that should have shown me information about what I have, with "Device for boot loader installation: /dev/sda" at the bottom and the options Back, Quit, or Install (I think it is the Installation Type window). So I did what I have been doing and chose Quit. What do I do now? Sorry that it seems like I cannot do anything on my own. In the YouTube video "how to install ubuntu dual-boot alongside windows", Ubuntu installs so easily: the installation options page gives three options, including dual installation, and the disk even lets you use a slider to choose the partition size you want. Yet my Ubuntu live CD is a mess, even though I checked it as one of you told me and it came back good. That video also says you should press a control key to choose which device you are using to install Ubuntu before the screen comes up; I guess that's because it is old. This page also shows easy steps that do not show up on my CD: how to dual-boot Ubuntu and Windows 7.

    P.S. I saw this on the Windows 7 website, windows.microsoft.com/en-US/windows7/Formatting-disks-and-drives-frequently-asked-questions (I had to leave out the http:// because it said I am only allowed 2 links on a page). Create a boot partition:

    Warning: If you are installing different versions of Windows, you must install the earliest version first. If you don't do this, your computer may become inoperable.

    1. Open Computer Management by clicking the Start button, clicking Control Panel, clicking System and Security, clicking Administrative Tools, and then double-clicking Computer Management. (Administrator permission required: if you're prompted for an administrator password or confirmation, type the password or provide confirmation.)
    2. In the left pane, under Storage, click Disk Management.
    3. Right-click an unallocated region on your hard disk, and then click New Simple Volume.
    4. In the New Simple Volume Wizard, click Next.
    5. Type the size of the volume you want to create in megabytes (MB) or accept the maximum default size, and then click Next.
    6. Accept the default drive letter or choose a different drive letter to identify the volume, and then click Next.
    7. In the Format Partition dialog box, do one of the following: if you don't want to format the volume right now, click Do not format this volume, and then click Next; to format the volume with the default settings, click Next. For more information about formatting, see Formatting disks and drives: frequently asked questions.
    8. Review your choices, and then click Finish.

    And this on another page, Formatting disks and drives: frequently asked questions. Hard disks, the primary storage devices on your computer, need to be formatted before you can use them. When you format a disk, you configure it with a file system so that Windows can store information on the disk. Hard disks in new computers running Windows are already formatted. If you buy an additional hard disk to expand the storage of your computer, you might need to format it. Storage devices such as USB flash drives and flash memory cards usually come preformatted by the manufacturer, so you probably won't need to format them. CDs and DVDs, on the other hand, use different formats from hard disks and removable storage devices. For information about formatting CDs and DVDs, see Which CD or DVD format should I use? Warning: Formatting erases any existing files on a hard disk. If you format a hard disk that has files on it, the files will be deleted.

    What I did was: I got to the Computer Management section, clicked on drive HP (C:) (it put stripes on it to show it was selected), clicked Action, selected All Tasks, then Shrink Volume, and chose how much of the space it offered I wanted (12.93 GB). That was all I did. Then I tried to install Ubuntu. I never got the third screen that is in the video I included (the YouTube one with the English guy), the Installation Type screen, and I also did not get the fourth screen that allows you to select the partition size. What I got next was the second Installation Type window shown on the linuxbsdos.com page that I included, and it showed no information about any drives (no drives or partitions were listed), only the boot loader statement and the /dev/sda bar, and that's why I did not press Install but chose to Quit.

    Sorry, I just now saw your answer, irrational john. I shrank HP (C:) by 12.93 GB; my unallocated space is now 12.93 GB and HP (C:) = 907.17 GB NTFS. You are correct with everything you said. This is what I read on (http://)windows.microsoft.com/en-US/windows7/Create-a-boot-partition (I am only allowed 2 links): Create a boot partition. You must be logged on as an administrator to perform these steps. A boot partition is a partition that contains the files for the Windows operating system. If you want to install a second operating system on your computer (called a dual-boot or multiboot configuration), you need to create another partition on the hard disk, and then install the additional operating system on the new partition. Your hard disk would then have one system partition and two boot partitions. (A system partition is the partition that contains the hardware-related files. These tell the computer where to look to start Windows.) To create a partition on a basic disk, there must be unallocated disk space on your hard disk. With Disk Management, you can create a maximum of three primary partitions on a hard disk. You can create extended partitions, which include logical drives within them, if you need more partitions on the disk. [Picture: unallocated disk space in Computer Management.] If there is no unallocated space, you will either need to create space by shrinking or deleting an existing partition or by using a third-party partitioning tool to repartition your hard disk. For more information, see Can I repartition my hard disk? To create a boot partition: (Warning: if you are installing different versions of Windows, you must install the earliest version first. If you don't do this, your computer may become inoperable.)

    1. Open Computer Management by clicking the Start button, clicking Control Panel, clicking System and Security, clicking Administrative Tools, and then double-clicking Computer Management. (Administrator permission required: if you're prompted for an administrator password or confirmation, type the password or provide confirmation.)
    2. In the left pane, under Storage, click Disk Management.
    3. Right-click an unallocated region on your hard disk, and then click New Simple Volume.
    4. In the New Simple Volume Wizard, click Next.
    5. Type the size of the volume you want to create in megabytes (MB) or accept the maximum default size, and then click Next.
    6. Accept the default drive letter or choose a different drive letter to identify the volume, and then click Next.
    7. In the Format Partition dialog box, do one of the following: if you don't want to format the volume right now, click Do not format this volume, and then click Next; to format the volume with the default settings, click Next. For more information about formatting, see Formatting disks and drives: frequently asked questions.
    8. Review your choices, and then click Finish.

    I did what you told me, @irrational john, and this is the screen shot. I entered:

        ubuntu@ubuntu:~$ sudo os-prober

    The computer did not respond, so I entered:

        ubuntu@ubuntu:~$ sudo apt-get -y remove dmraid

    The computer responded with:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          dmraid
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        After this operation, 141 kB disk space will be freed.
        (Reading database ... 147515 files and directories currently installed.)
        Removing dmraid ...
        update-initramfs is disabled since running on read-only media
        Processing triggers for man-db ...

    I entered:

        ubuntu@ubuntu:~$ sudo os-prober

    The computer responded with:

        /dev/sda1:Windows 7 (loader):Windows:chain
        /dev/sda3:Windows Recovery Environment (loader):Windows1:chain
        ubuntu@ubuntu:~$

    @obsessiveFOSS I don't know what a GRUB menu is, and I do not know what the Ubuntu boot option is. The answer you gave me was correct, this one: {This apparently removes the dmraid metadata. After doing that, you can use the desktop icon Install Ubuntu 12.04 LTS to start the Ubuntu installer. This time the Installation Type window should contain the option to Install Ubuntu alongside Windows 7.} This is what I decided to do. I did not see the rest of your help till now. Nevertheless, I think the best thing for me to do now is to get a cheap used laptop and either do a dual installation or just install Ubuntu onto it. That way, if I have any issues that I cannot solve like the one I had here, at least I will still have a usable computer to work on and to use to get answers, because I am not an expert like the people on this forum. Thanks a lot; I will try to keep learning and do enough research to some day help someone else.

    Read the article

  • What's Your Favorite Harmless Computer Practical Joke?

    - by Ben Griswold
    A couple of months ago, I returned from lunch to find that typing any key launched a random application. As I always lock my machine before walking away from it, my first thought was that a co-worker was playing a practical joke on me. As it turned out, the cause was random computer wonkiness. But just in case there's a need :), what's your favorite harmless computer practical joke? For example, would you alter hosts file entries to direct google.com to a random site, or would you put tape over the optical sensor of a co-worker’s mouse?

    Read the article

  • How can I configure a Linksys EA4500 + usb printer for network printing (without connect cloud)

    - by Larry Kyrala
    The documentation and classic firmware (2.0.37) for Cisco's Linksys EA4500 are a bit sparse on setup details. They say I can connect a USB printer, but then go on to try to sell the "Connect Cloud" remote management software. I don't want that. I just want to know how to set this up with the existing advanced firmware. Is it possible? AFAIK, to set up an IPP or LPD printer there is usually some kind of queue configuration on the server (i.e. the EA4500 in this case), but I can't find it in the firmware. I have also been unable to reach the printer via any of the usual protocols from Windows 7 or Mac OS X (Windows network share, IPP/LPD, etc.). I'm curious whether I need to have the "Storage" accounts active and connect to my router either via the local IP or the router name. There are a lot of unknowns here; it would help to know how this particular router actually works.

    Read the article

  • How to ...set up new Java environment - largely interfaces...

    - by Chris Kimpton
    Hi, it looks like I need to set up a new Java environment for some interfaces we need to build. Say our system is X and we need interfaces to systems A, B and C; then we will be writing interfaces X-A, X-B and X-C. Our system has a bus within it, so publishing on our side will be to the bus, and the interface processes will take messages from the bus and map them to the destination system. It's a vendor-based system, so most of the core code we can't touch. Currently we are thinking we will have several processes, one per interface. The question is how to structure things. Several of the APIs we need to work with are Java based. We could go EJB, but we would prefer to keep it simple, one process per interface, so that we can restart them individually. Similarly, SOA seems overkill, although I am probably mixing my thoughts about implementations of it with the concepts behind it... Currently thinking that something Spring-based is the way to go. In true "leverage a new tech if possible" style, I am thinking maybe we can shoehorn some JRuby into this, perhaps to make the APIs more readable, perhaps event-machine-like, and to make the interface code more business-friendly, perhaps even storing the mapping code in the DB as Ruby snippets that get mixed in... but that's an aside... So, any comments/thoughts on the Spring approach, or anything more up-to-date/relevant these days? EDIT: Looking at JRuby further, I am tempted to write it fully in JRuby... in which case do we need any frameworks at all, perhaps just some gems to make things clearer... Thanks in advance, Chris
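
    Not an answer on Spring versus anything newer, but to make the "one process per interface" shape concrete, here is a minimal plain-Java sketch of what each X-A/X-B/X-C process could look like. The Bus, BusMessage, Mapper and SystemClient types are hypothetical placeholders for the vendor bus and destination APIs; with Spring you would mostly be wiring these same three collaborators together instead of constructing them in a main method, and each interface would get its own main so it can be restarted on its own.

        /**
         * One stand-alone process per interface (X-A, X-B, X-C): take messages off
         * the bus, map them, push them to the destination system. The nested types
         * are placeholders for the real vendor/bus APIs.
         */
        public class InterfaceProcess {

            interface Bus { BusMessage take() throws InterruptedException; }        // subscribe side of the X bus
            interface SystemClient { void send(String payload) throws Exception; }  // destination system A/B/C
            interface Mapper { String map(BusMessage m); }                          // X format -> destination format

            record BusMessage(String type, String body) {}

            private final Bus bus;
            private final Mapper mapper;
            private final SystemClient target;

            InterfaceProcess(Bus bus, Mapper mapper, SystemClient target) {
                this.bus = bus;
                this.mapper = mapper;
                this.target = target;
            }

            void run() throws InterruptedException {
                while (!Thread.currentThread().isInterrupted()) {
                    BusMessage m = bus.take();          // blocks until X publishes something
                    try {
                        target.send(mapper.map(m));     // the mapping is the only per-interface logic
                    } catch (Exception e) {
                        // a real process would retry or dead-letter here
                        System.err.println("failed to deliver " + m.type() + ": " + e.getMessage());
                    }
                }
            }
        }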

    Read the article

  • Is it possible to redirect/bounce TCP traffic to external destination, base on rules?

    - by xfx
    I'm not even sure if this is possible... Also, please forgive my ignorance on the subject. What I'm looking for is "something" that would allow me to redirect all TCP traffic arriving at host A to host B, but based on some rules. Say host A (the intermediary) receives a request (say a simple HTTP request) from a host with domain X. In that case, it lets it pass through and it's handled by host A itself. Now, let's suppose that host A receives another HTTP request from a host with domain Y, but this time, due to some customizable rules, host A redirects all the traffic to host B, and host B is able to handle it as if it came directly from domain Y. And, at this point, both host B and the host with domain Y are able to freely communicate (of course, through host A). NOTE: All these hosts are on the Internet, not inside a LAN. Please let me know if the explanation is not clear enough.
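
    For what it's worth, this can be done purely at the TCP level on host A with a small rule-based relay; in practice an iptables DNAT rule or an HAProxy ACL is the more usual tool, but a sketch shows the idea. Everything below is an assumption for illustration: the listening port, the backend addresses, and the rule that one client subnet goes to host B while everything else is served locally. Note that a plain relay like this means host B sees the connections as coming from host A, not from domain Y; keeping the original source address visible to host B needs transparent-proxying support in the OS.

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.net.ServerSocket;
        import java.net.Socket;

        /** Tiny rule-based TCP relay: requests from one network are bounced to host B, the rest stay on host A. */
        public class RuleRelay {

            public static void main(String[] args) throws Exception {
                try (ServerSocket listener = new ServerSocket(8080)) {            // host A's public port (assumed)
                    while (true) {
                        Socket client = listener.accept();
                        String clientIp = client.getInetAddress().getHostAddress();

                        // The "rule": clients in 203.0.113.0/24 are relayed to host B,
                        // everything else is served by a local backend on host A itself.
                        InetSocketAddress backend = clientIp.startsWith("203.0.113.")
                                ? new InetSocketAddress("hostB.example.com", 8080)
                                : new InetSocketAddress("127.0.0.1", 8081);

                        new Thread(() -> relay(client, backend)).start();
                    }
                }
            }

            private static void relay(Socket client, InetSocketAddress backend) {
                try (client; Socket upstream = new Socket(backend.getHostString(), backend.getPort())) {
                    Thread toBackend = new Thread(() -> pump(client, upstream));  // client -> backend
                    toBackend.start();
                    pump(upstream, client);                                       // backend -> client
                    toBackend.join();
                } catch (Exception ignored) {
                    // backend unreachable or connection reset: just drop the session in this sketch
                }
            }

            private static void pump(Socket from, Socket to) {
                try {
                    InputStream in = from.getInputStream();
                    OutputStream out = to.getOutputStream();
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        out.write(buf, 0, n);
                    }
                    to.shutdownOutput();                                          // propagate the half-close
                } catch (Exception ignored) {
                }
            }
        }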

    Read the article

  • SQL Azure Security: DoS

    - by Herve Roggero
    Since I decided to understand in more depth how SQL Azure works, I started to dig into its performance characteristics, and I wrote an application that lets me put SQL Azure to the test and compare results with a local SQL Server database. One of the options I added is the ability to issue the same command on multiple threads to get certain performance metrics. That's when I stumbled on an interesting security feature of SQL Azure: its Denial of Service (DoS) detection engine. What this security feature does is check the number of connections being established, and if the connection rate is too high, SQL Azure blocks all communication from that machine. I am still trying to learn more about this specific feature, but it appears that going to the SQL Azure portal and testing the connection from the portal "resets" the feature and you are allowed to connect again... until you reach the login threshold. In the specific test I was performing, all the logins were successful; I haven't tried to log in with an invalid account or password... that will be for next time. On my LinkedIn group (SQL Server and SQL Azure Security: http://www.linkedin.com/groups?gid=2569994&trk=hb_side_g), Chip Andrews (www.sqlsecurity.com) pointed out that this feature could in itself present an internal threat: in theory, a rogue application could issue many login requests from a NATed network, which could potentially prevent any production system within the same network from connecting to SQL Azure. My initial response was that this could indeed be the case. However, while the TCP packets carry the latest NATed IP address of a machine (which masks the origin of the machine making the SQL request), the TDS protocol itself contains the IP address of the machine making the initial request, so technically there would be a way for SQL Azure to block only the internal IP address making the rogue requests. So this warrants further investigation... stay tuned...
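
    From the client side, the practical consequence is to keep the connection rate down rather than retry in a tight loop when logins start failing. A rough sketch of that with JDBC and exponential backoff follows, assuming the Microsoft SQL Server JDBC driver is on the classpath; the server name, credentials and retry limits are placeholders, and the actual thresholds SQL Azure uses are not documented, so treat this as a pattern rather than a tuned value.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;

        /** Open a connection to SQL Azure with exponential backoff instead of hammering the service. */
        public class BackoffConnect {

            public static void main(String[] args) throws Exception {
                // Placeholder connection string: substitute your own server/database/credentials.
                String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                           + "databaseName=mydb;user=myuser@myserver;password=...;encrypt=true";

                Connection c = connectWithBackoff(url, 5);
                System.out.println("connected: " + !c.isClosed());
                c.close();
            }

            static Connection connectWithBackoff(String url, int maxAttempts)
                    throws SQLException, InterruptedException {
                long delayMs = 500;
                SQLException last = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        return DriverManager.getConnection(url);
                    } catch (SQLException e) {
                        last = e;                  // could be transient throttling or the DoS guard kicking in
                        Thread.sleep(delayMs);     // back off so the client does not look like a connection flood
                        delayMs = Math.min(delayMs * 2, 30_000);
                    }
                }
                throw last;                        // give up after maxAttempts
            }
        }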

    Read the article

  • Best wireless mouse/keyboard for conference room computer?

    - by Brett
    What would be the best wireless mouse and keyboard for a conference room computer that is used by multiple employees throughout the day? The set we have in there right now is really cheap and doesn't work half the time, because the batteries run down when it's left on and it tends to lose its pairing with the computer's dongle. Any ideas on something that won't have battery problems and is reliable and reasonably tough? Price isn't too much of an issue, but I'd still prefer to get something for less than $100 in case someone walks away with it.

    Read the article

  • sysprep failure on Windows Server 2008

    - by dushyantp
    Before deploying an Azure VM Role, we need to run:

        %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown

    But in my case sysprep fails, with the log file %windir%\system32\sysprep\Panther\setuperr.txt saying:

        2012-07-05 08:03:57, Error [0x0f0073] SYSPRP RunExternalDlls:Not running DLLs; either the machine is in an invalid state or we couldn't update the recorded state, dwRet = 31
        2012-07-05 08:03:57, Error [0x0f00ae] SYSPRP WinMain:Hit failure while processing sysprep cleanup external providers; hr = 0x8007001f

    I do not always want to create a new image. Is there any workaround? I followed the instructions in MS support here and tried:

        %windir%\system32\sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:.\unattend.xml

    It did not work. Under certain circumstances I need to tear down the VM image from Azure and re-deploy it with some more changes, so sysprep has to run almost twice every week.

    Read the article

  • Apache + SuExec + php-fpm - how to set them up?

    - by FractalizeR
    Hello. I wonder if there is a good guide on how to set up Apache + SuExec + php-fpm? I have a server on which I am going to host several separate websites, so I need PHP to run as the site-owner user. As far as I can see, php-fpm is a little different from php-fcgi. Is there a need for mod_fcgid from Apache in this case? How do I set this all up? For now my site is running Apache + mod_suphp + php-cgi, so... it's good, but a little slow. I want to preserve security and gain the ability to use APC.

    Read the article

  • How to Export Flash Animation Data

    - by charliep
    I'd love for my partner, the artist, to be able to animate using Flash movieclips and timelines. Then I, the programmer, would like to read the raw Flash info and re-program it into my engine of choice (which happens to be Torque2D). The data I'd want is:
    - the bitmap images that were used in Flash, like the head and body
    - the links between the images, like where the head connects to the body
    - the motion data from the Flash animation, like move, rotate (at what speed), shear, etc. for the head or arms or whatever
    Is there any way to get this data? Here's what I know so far. There are tools like SWFSheet and Spriteloq that convert the entire Flash animation into a frame-by-frame sprite animation (in a sprite sheet). This would take too much space in my case, so I'd like to avoid that; re-animating on the fly would take much less texture memory. There is a PDF that describes the SWF file format, but NOT the individual components like the movieclips. So does anyone know of a library I can use, or how I can learn more about the movieclip components and whatnot? (more better tags: transform, export, convert)
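
    One possible starting point: the SWF container itself is simple enough to walk without a library. After the header, a SWF is just a sequence of tagged records, and the data listed above lives in specific tag types (the DefineBits*/DefineBitsLossless* tags hold the bitmaps, DefineSprite tags are the movieclips, and PlaceObject2/PlaceObject3 tags carry the per-frame placement matrices for move/rotate/shear). Below is a rough Java sketch that only handles an uncompressed ("FWS") file and just prints the tag types as a first step; parsing the tag bodies for the actual matrices is the part to build on top of this.

        import java.io.DataInputStream;
        import java.io.FileInputStream;
        import java.util.Map;

        /** Walks the tag stream of an uncompressed (FWS) SWF and prints the tag types. */
        public class SwfTagDump {

            // A few tag codes from the SWF spec that matter for this use case.
            static final Map<Integer, String> NAMES = Map.of(
                    0, "End", 1, "ShowFrame", 2, "DefineShape",
                    21, "DefineBitsJPEG2", 26, "PlaceObject2",
                    36, "DefineBitsLossless2", 39, "DefineSprite", 70, "PlaceObject3");

            public static void main(String[] args) throws Exception {
                try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                    byte[] sig = new byte[3];
                    in.readFully(sig);                             // "FWS" = uncompressed, "CWS"/"ZWS" = compressed
                    if (sig[0] != 'F') throw new IllegalArgumentException("compressed SWF: inflate it first");
                    int version = in.readUnsignedByte();
                    skip(in, 4);                                   // file length (little-endian UI32)

                    int b = in.readUnsignedByte();                 // stage RECT: 5-bit Nbits, then 4 coordinates
                    int nbits = (b >> 3) & 0x1F;
                    skip(in, ((5 + 4 * nbits + 7) / 8) - 1);       // rest of the RECT
                    skip(in, 4);                                   // frame rate (UI16) + frame count (UI16)

                    System.out.println("SWF v" + version);
                    while (true) {
                        int header = readUI16(in);                 // upper 10 bits = tag code, lower 6 = length
                        int code = header >>> 6;
                        long len = header & 0x3F;
                        if (len == 0x3F) len = readUI32(in);       // long-form length
                        System.out.printf("tag %-3d %-20s %d bytes%n", code, NAMES.getOrDefault(code, "?"), len);
                        if (code == 0) break;                      // End tag
                        skip(in, len);                             // a real exporter would parse the body here
                    }
                }
            }

            static int readUI16(DataInputStream in) throws Exception {
                return in.readUnsignedByte() | (in.readUnsignedByte() << 8);   // SWF integers are little-endian
            }

            static long readUI32(DataInputStream in) throws Exception {
                return readUI16(in) | ((long) readUI16(in) << 16);
            }

            static void skip(DataInputStream in, long n) throws Exception {
                in.readFully(new byte[(int) n]);
            }
        }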

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – prior to SSDT you would need to define a DEFAULT constraint, yet it feels rather cumbersome to create an object purely for the purpose of pushing through a deployment – and that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
                CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
    Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

    One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete. We build web parts, master pages, images, CSS files and other artifacts that we push to the client with a WSP (solution package), and then we have them finish the solution by building their site pages and adding the web parts to those pages. I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code. I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together. What I will discuss, and provide a solution for, is a package that has:

    1. Master Page
    2. Page Layout
    3. Page Web Parts
    4. Site Pages

    Almost all of it is done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site. I am not implying that in development we skip this; in fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is that we take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and voila, their site appears! I had a project that had me build 8 pages like this as part of the solution.

    In this blog post, I am taking a master page solution that I have called DJGreenMaster, shown on my Office 365 development site. It is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links. I use this master page a lot in my site development; it is easy to change the color and site logo with a little CSS. I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code.

    Let's look at the solution package for DJGreenMaster, as that will be the basis project for building the site pages. It is a complete solution to add a master page to a site collection, and it contains:

    1. A Master Page module which contains the Master Page and Page Layout
    2. The Footer module to add the Footer Web Part
    3. Miscellaneous modules to add images, jQuery, CSS and a subsite page
    4. Three features and two feature event receivers:
       a. DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an event receiver to check it in
       b. DJGreenMaster, used to add the Master Page and Page Layout; in an event receiver it changes the master page to DJGreenMaster, creates the footer list and checks the files in
       c. DJGreenMasterWebParts, which adds the Footer Web Part to the site collection

    I won't go over the code for this as I will give it to you at the end of this blog post, and I have discussed creating a list in code in a previous post. So we have the basis to begin what is germane to this discussion: the first two requirements are completed, and I now need to add the page web parts and build the pages in code. For the page web parts, I will use a Weather Web Part downloaded from CodePlex, which for simplicity does not use a SharePoint custom list, and a SharePoint custom calendar web part downloaded from MSDN, to which I added some functionality to make the events color-coded and exceed the built-in 10 overlays using jQuery.

    Here is the solution with the added projects. Here is a screen shot of the Weather Web Part deployed, and here is a screen shot of the site calendar with jQuery.

    Okay, now we get to the final item: creating the publishing pages. We need to add a feature receiver to the DJGreenMaster project, which I will name DJSitePages, along with an event receiver. We will build the page at the site collection level, and all of the code necessary will be contained in the event receiver. I added a reference to Microsoft.SharePoint.Publishing.dll, contained in the ISAPI folder of the 14 hive.

    First we will add some static methods which we will call from our event receiver:

        private static void checkOut(string pagename, PublishingPage p)
        {
            if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
            {
                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
                {
                    p.CheckOut();
                }

                if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
                {
                    p.CheckIn("initial");
                    p.CheckOut();
                }
            }
        }

        private static void checkin(PublishingPage p, PublishingWeb pw)
        {
            SPFile publishFile = p.ListItem.File;

            if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
            {
                publishFile.CheckIn("CheckedIn");
                publishFile.Publish("published");
            }
            // In case of content approval, approve the file (publishing site)
            if (pw.PagesList.EnableModeration)
            {
                publishFile.Approve("Initial");
            }
            publishFile.Update();
        }

    In a publishing site, CheckIn and CheckOut are required when dealing with pages. Okay, let's look at the FeatureActivated event receiver:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            object oParent = properties.Feature.Parent;

            if (properties.Feature.Parent is SPWeb)
            {
                currentWeb = (SPWeb)oParent;
                currentSite = currentWeb.Site;
            }
            else
            {
                currentSite = (SPSite)oParent;
                currentWeb = currentSite.RootWeb;
            }

            // create the publishing pages
            CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
            //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
        }

    Basically we are calling the CreatePublishingPage method with these parameters: the current web, the name of the page, the page layout, and the title of the page.

    Let's look at the CreatePublishingPage method:

        private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
        {
            PublishingSite pubSiteCollection = new PublishingSite(site.Site);
            PublishingWeb pubSite = null;
            if (pubSiteCollection != null)
            {
                // Assign an object to the pubSite variable
                if (PublishingWeb.IsPublishingWeb(site))
                {
                    pubSite = PublishingWeb.GetPublishingWeb(site);
                }
            }

            // Search for the page layout for creating the new page.
            // If it cannot be found in the collection, return, because the page
            // has to be based on an existing page layout.
            PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
            if (currentPageLayout == null)
            {
                return;
            }

            PublishingPageCollection pages = pubSite.GetPublishingPages();
            foreach (PublishingPage p in pages)
            {
                // The page already exists
                if ((p.Name == pageName)) return;
            }

            PublishingPage newPage = pages.Add(pageName, currentPageLayout);
            newPage.Description = pageName.Replace(".aspx", "");
            // Here you can set some properties like:
            newPage.IncludeInCurrentNavigation = true;
            newPage.IncludeInGlobalNavigation = true;
            newPage.Title = title;

            // build the page
            switch (pageName)
            {
                case "Home.aspx":
                    checkOut(pageName, newPage);
                    BuildHomePage(site, newPage);
                    break;
                default:
                    break;
            }
            // newPage.Update();

            // Now we can check the newly created page in to the Pages library
            checkin(newPage, pubSite);
        }

    The narrative of what is going on here:

    1. We need to find out if we are dealing with a publishing web.
    2. Get the page layout.
    3. Create the page in the Pages list.
    4. Based on the page name we build that page. (Here is where we can add the methods to build multiple pages.)

    In the switch we call BuildHomePage, where all the work is done to add the web parts. Prior to adding the web parts, we need to add references to the two web part projects in the solution:

        using WeatherWebPart.WeatherWebPart;
        using CSSharePointCustomCalendar.CustomCalendarWebPart;

    We can then reference them in the BuildHomePage method.

    Let's look at BuildHomePage:

        private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
        {
            // Get the web part manager for each page and do the same code as below
            // (copy and paste, changing to the web parts for that page).
            SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
                web.Url + "/Pages/Home.aspx",
                System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

            WeatherWebPart.WeatherWebPart.WeatherWebPart wwp = new WeatherWebPart.WeatherWebPart.WeatherWebPart()
                { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };
            //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
            //wwpDic.Add("AreaCode", "2504627");
            //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

            // Add the web part to a page layout Web Part Zone
            mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

            CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp = new CustomCalendarWebPart()
                { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };

            mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

            pubPage.Update();
        }

    Here is what we are doing:

    1. We get a reference to the SharePoint limited web part manager and point it at Home.aspx.
    2. We instantiate a new Weather Web Part and use the manager to add it to the page in a web part zone identified by ID, hence the need for a page layout where the developer knows the zone IDs.
    3. We instantiate the Calendar Web Part and use the manager to add it to the page.
    4. We then call the publishing page's Update method.
    5. Lastly, the CreatePublishingPage method checks in the page just created.

    Here is a screen shot of the page right after a deploy. Okay, I know we could make a home page look much better! However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built. So what am I saying? Build your web parts, master pages, etc., and at the very end of the engagement build the pages. The client will be very happy! Here is the code for this solution: Code

    Read the article

  • Can't boot Ubuntu 12.04 from external Hard Drive using Mac

    - by Catgirl the Crazy
    Recently, I upgraded the RAM and hard drive on my Early 2008 Macbook to improve the performance. Rather than throw away the old hard drive, I bought an enclosure for it to turn it into an external hard drive, and, since all the data was migrated to my new drive, I decided to install Ubuntu on it for funsies (note: I am a near-total Ubuntu n00b). My first attempt to install Ubuntu didn't work (it gave me errors about not being able to find the BIOS or something), but my second attempt finished successfully (can't remember what, if anything, I did different). However, when I plug the external drive into my Macbook, it gives me a message saying it can't read the disk. Moreover, when I go into the Startup Manager (i.e.: what you get when you turn on the Macbook while holding the option key), the external drive is not one of the available startup disks. I thought this might be because I have an older Macbook, so I tried booting it with my mom's Late 2011 Macbook, and got the same results. Then I tried booting it through my dad's Dell laptop that runs Windows 7, and that time it worked. This is really counter intuitive to me, since the hard drive originally came from a Macbook, so if anything you'd think it would be less compatible with the Windows laptop than the Macbook. In case it helps, here's a link to a picture of how I set up the partition table while doing the install (not shown there is the fact that I checked the "Format?" box next to the /boot partition, since it gave me a warning when I tried to continue the installation without doing so) Anyone have any clue at all? If it helps, the hard drive I'm using is a 120GB 5400-rpm Serial ATA hard disk drive.

    Read the article

  • How to get a Sun Ray to load a firmware from elsewhere

    - by vdiozguy
    I run a Sun Ray/VDI demo environment internally within the company, and because it's not a public service, I need to tell my Sun Rays to connect to it directly so that I don't get redirected to the corporate servers. To get any new Sun Ray to connect to *my* setup I usually pull out my laptop so that the Sun Ray can load the new version of the F/W along with the permission to pull up the management GUI via STOP-S. But there is a better way if you have another Sun Ray server handy:
    1) Allow your Sun Ray to connect to the default corporate server.
    2) Log in to a "regular" session, that is, a Solaris or Linux desktop on the Sun Ray server itself.
    3) In a terminal, utswitch to your server (/opt/SUNWut/bin/utswitch -h myserver).
    4) Again, log in to a regular session there.
    5) In a terminal, issue "/opt/SUNWut/lib/utload -S myserver -w".
    6) Watch your firmware load and wait.
    7) The Sun Ray will reboot and connect to the first server again. Repeat steps 2-4.
    8) Issue "/opt/SUNWut/lib/utload -S myserver -f SunRay.enableGUI".
    9) Press STOP-S and be merry.
    NOTE: I'm sure there is an even better way - this is totally unsupported, most likely a figment of my imagination. In any case, this post will self-destruct in BOOM.

    Read the article

  • Which Message Queue should I choose (must run on Linux)

    - by MHS
    There are many open source message queues for Linux, and I need some help deciding which one I should go for. My problem is simple: I get sent a list of files that needs to be processed. Each job can't be split up, but the jobs are self-contained and can be spread across multiple computers. I'm thinking of solving this using a message queue: multiple clients send messages to a central queue, and each queue has a number of subscribers that take jobs from that queue when they have finished processing their current job. Ideally it should have the following qualities:
    - The message queue must be able to store unprocessed messages in case of a shutdown/reboot.
    - A job can only be processed by a single subscriber (don't want duplicate jobs).
    - The subscribers should be able to send jobs of their own, which will be processed by a different set of subscribers.
    Can anyone suggest a simple to use message queue?
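
    One option that appears to cover all three bullets is RabbitMQ (AMQP): durable queues plus persistent messages survive a shutdown, a prefetch of 1 with explicit acknowledgements gives each job to exactly one subscriber (and redelivers it if that worker dies mid-job), and any worker can publish new jobs to a different queue for a different set of subscribers. A rough worker sketch with the RabbitMQ Java client follows; the broker host and queue names are placeholders.

        import com.rabbitmq.client.Channel;
        import com.rabbitmq.client.Connection;
        import com.rabbitmq.client.ConnectionFactory;
        import com.rabbitmq.client.DeliverCallback;
        import com.rabbitmq.client.MessageProperties;

        import java.nio.charset.StandardCharsets;

        /** Worker that takes one file-processing job at a time and can enqueue follow-up jobs. */
        public class JobWorker {

            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ConnectionFactory();
                factory.setHost("queue-host.example.com");               // placeholder broker host

                Connection conn = factory.newConnection();
                Channel ch = conn.createChannel();

                ch.queueDeclare("jobs", true, false, false, null);       // durable: survives a broker restart
                ch.queueDeclare("followup", true, false, false, null);   // a different set of subscribers reads this
                ch.basicQos(1);                                          // at most one unacked job per worker

                DeliverCallback onJob = (tag, delivery) -> {
                    String file = new String(delivery.getBody(), StandardCharsets.UTF_8);
                    process(file);                                       // the actual work on one file

                    // This worker can itself emit a new job for a different group of subscribers.
                    ch.basicPublish("", "followup",
                            MessageProperties.PERSISTENT_TEXT_PLAIN,
                            ("post-process " + file).getBytes(StandardCharsets.UTF_8));

                    // Ack only after success, so an unfinished job is redelivered if this worker dies.
                    ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                };
                ch.basicConsume("jobs", false, onJob, tag -> { });       // autoAck=false
            }

            static void process(String file) {
                System.out.println("processing " + file);
            }
        }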

    Read the article

  • Why a static main method in Java and C#, rather than a constructor?

    - by Konrad Rudolph
    Why did (notably) Java and C# decide to have a static method as their entry point – rather than representing an application instance by an instance of an Application class, with the entry point being an appropriate constructor which, at least to me, seems more natural? I’m interested in a definitive answer from a primary or secondary source, not mere speculations. This has been asked before. Unfortunately, the existing answers are merely begging the question. In particular, the following answers don’t satisfy me, as I deem them incorrect: There would be ambiguity if the constructor were overloaded. – In fact, C# (as well as C and C++) allows different signatures for Main so the same potential ambiguity exists, and is dealt with. A static method means no objects can be instantiated before so order of initialisation is clear. – This is just factually wrong, some objects are instantiated before (e.g. in a static constructor). So they can be invoked by the runtime without having to instantiate a parent object. – This is no answer at all. Just to justify further why I think this is a valid and interesting question: Many frameworks do use classes to represent applications, and constructors as entry points. For instance, the VB.NET application framework uses a dedicated main dialog (and its constructor) as the entry point1. Neither Java nor C# technically need a main method. Well, C# needs one to compile, but Java not even that. And in neither case is it needed for execution. So this doesn’t appear to be a technical restriction. And, as I mentioned in the first paragraph, for a mere convention it seems oddly unfitting with the general design principle of Java and C#. To be clear, there isn’t a specific disadvantage to having a static main method, it’s just distinctly odd, which made me wonder if there was some technical rationale behind it. I’m interested in a definitive answer from a primary or secondary source, not mere speculations. 1 Although there is a callback (Startup) which may intercept this.
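
    Not an answer to the "why", but to make the alternative described in the first paragraph concrete, here is what the constructor-as-entry-point style would look like next to what Java actually requires; many frameworks effectively bolt the first onto the second by hand. The class is an illustration only, not anyone's prescribed pattern:

        /**
         * The "constructor as entry point" style the question describes, versus what
         * Java actually specifies. Today the constructor variant can only exist by
         * convention: something still has to be the static entry point.
         */
        public class Application {

            private final String[] args;

            // In the imagined model the runtime would instantiate this directly
            // and the constructor would *be* the program.
            public Application(String[] args) {
                this.args = args;
                run();
            }

            private void run() {
                System.out.println("running with " + args.length + " argument(s)");
            }

            // What the language actually requires: a static main, which many
            // frameworks immediately use to bootstrap an application object anyway.
            public static void main(String[] args) {
                new Application(args);
            }
        }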

    Read the article

  • How can I retrieve the details of the file from an outbound operation in BPEL 11g

    - by [email protected]
    Several times, we come across requirements where we need to capture the details of the file that got written out as part of a BPEL process invoking a File/FTP Adapter. Consider a case where we're using a FileNamingConvention of "PurchaseOrder_%SEQ%.txt" and we need to do some post-processing based on the filename (please remember that we wouldn't know the filename until the adapter invocation completes). In order to achieve this, we need to manually tweak the WSDL so that the File/FTP Adapter can return the metadata of the file that was written out. In general, the File/FTP Write/Put WSDL operations are one-way, as shown below. The File/FTP Adapters are designed to return the metadata if this WSDL is tweaked into a two-way WSDL. In addition, the <wsdl:output/> must import the fileread.xsd schema (see below); you will need to copy fileread.xsd from here into the xsd folder of your composite. Finally, we tweak the WSDL (highlighted below), and the BPEL <invoke> would look as shown below. Please note that the file metadata would be returned as a part of the BPEL output variable:

    Read the article

  • eSTEP TechCast - November 2013

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 7th of November, and would be happy if you could join. Please see below the details for the next TechCast.
    Date and time: Thursday, 07 November 2013, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST)
    Title: The Operational Management benefits of Engineered Systems
    Abstract: Oracle Engineered Systems require significantly less administration effort than traditional platforms. This presentation will explain why this is the case, how much can be saved, and discuss the best practices recommended to maximise Engineered Systems operational efficiency.
    Target audience: Tech Presales
    Speaker: Julian Lane
    Call info: Call-in toll-free number: 08006948154 (United Kingdom); Call-in toll-free number: +44-2081181001 (United Kingdom); Conference Code: 803 594 3; Security Passcode: 9876
    Webex info (Oracle Web Conference): Meeting Number: 599 156 244; Meeting Password: tech2011
    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Combo/Input LOV displaying non-reference key value

    - by [email protected]
    It's a very common use case for an LOV that we want to display a non-key value in the list but store the key value in the DB. I had to do the same in a sample application I was building, and while implementing it I realized that there are multiple ways to achieve this. I am going to describe each of them below.
    Example: Let's take our classic HR schema. I have two tables, Employee and Department, where Dno is the foreign key attribute in Employee that references the Department table. I want to create an LOV for Department such that the list always displays Dname instead of Dno, but when I update it, it updates the reference key Dno. To achieve this I had three alternatives.
    1) Approach 1: Create a composite VO and add the attributes from Department into Employee using a join. Refer to the blog http://andrejusb.blogspot.com/2009/11/defining-lov-on-reference-attribute-in.html
    Positives:
    1. Easy to implement and use.
    2. We can use this attribute directly in queries defined on the new attribute, i.e. if I have to display it inside a query panel.
    Negative: We have to create an additional join on the VO.
    Ex:

        SELECT Employees.EMPLOYEE_ID,
               Employees.FIRST_NAME,
               Employees.LAST_NAME,
               Employees.EMAIL,
               Employees.PHONE_NUMBER,
               Department.Dno,
               Department.Dname
        FROM EMPLOYEES Employees, Department Department
        WHERE Employees.Dno = Department.Dno

    2) Approach 2 :

    Read the article

  • What are the consequences of immutable classes with references to mutable classes?

    - by glenviewjeff
    I've recently begun adopting the best practice of designing my classes to be immutable per Effective Java [Bloch2008]. I have a series of interrelated questions about degrees of mutability and their consequences. I have run into situations where a (Java) class I implemented is only "internally immutable" because it uses references to other mutable classes. In this case, the class under development appears from the external environment to have state.
    1. Do any of the benefits (see below) of immutable classes hold true even for only "internally immutable" classes?
    2. Is there an accepted term for the aforementioned "internal mutability"? Wikipedia's immutable object page uses the unsourced term "deep immutability" to describe an object whose references are also immutable.
    3. Is the distinction between mutability and side-effect-ness/state important?
    Josh Bloch lists the following benefits of immutable classes:
    - are simple to construct, test, and use
    - are automatically thread-safe and have no synchronization issues
    - do not need a copy constructor
    - do not need an implementation of clone
    - allow hashCode to use lazy initialization, and to cache its return value
    - do not need to be copied defensively when used as a field
    - make good Map keys and Set elements (these objects must not change state while in the collection)
    - have their class invariant established once upon construction, and it never needs to be checked again
    - always have "failure atomicity" (a term used by Joshua Bloch): if an immutable object throws an exception, it's never left in an undesirable or indeterminate state
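
    To make the "internally immutable" situation in the question concrete, here is a small illustration of the gap between a shallowly immutable class (final fields, no setters, but holding a reference to a mutable list) and a deeply immutable one that defensively copies; the class names are arbitrary:

        import java.util.ArrayList;
        import java.util.List;

        /** Shallowly immutable: final field, no setters, but the referenced list can still change. */
        final class ShallowTeam {
            private final List<String> members;

            ShallowTeam(List<String> members) {
                this.members = members;                 // keeps the caller's mutable list
            }

            List<String> members() {
                return members;                         // hands the mutable list back out
            }
        }

        /** Deeply immutable: defensive copy on the way in, unmodifiable view on the way out. */
        final class DeepTeam {
            private final List<String> members;

            DeepTeam(List<String> members) {
                this.members = List.copyOf(members);    // snapshot; later changes to the argument don't show up
            }

            List<String> members() {
                return members;                         // List.copyOf result is already unmodifiable
            }
        }

        class Demo {
            public static void main(String[] args) {
                List<String> source = new ArrayList<>(List.of("ann", "bob"));

                ShallowTeam shallow = new ShallowTeam(source);
                DeepTeam deep = new DeepTeam(source);

                source.add("mallory");                  // external mutation

                System.out.println(shallow.members());  // [ann, bob, mallory] -- state changed from outside
                System.out.println(deep.members());     // [ann, bob]          -- unaffected
            }
        }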

    Read the article

  • Remote deploying wars to a liferay installation

    - by iftrue
    With vanilla Tomcat, you can POST to URLs beneath SOMEURL/manager/ with a proper manager user role defined. The Liferay deployment of Tomcat, however, is missing the manager and host-manager applications, and when I copy the directories from a vanilla Tomcat installation, I get the exception below:

        Exception: javax.servlet.ServletException: Error allocating a servlet instance
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)
        root cause: java.lang.SecurityException: Servlet of class org.apache.catalina.manager.HTMLManagerServlet is privileged and cannot be loaded by this web application
            org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            java.lang.Thread.run(Thread.java:636)

    What's the proper way to remote deploy wars to a Liferay instance? (Not portlets, in my case.)

    Read the article

  • pam_exec.so PAM module does not export variable PAM_USER as stated in the documentation

    - by davidparks21
    I'm trying to use the pam_exec.so PAM module to execute a script which needs to know the username/password coming from the application (OpenVPN in this case). I have a script that executes printenv >> afile, but I don't see all the environment variables that the man page says pam_exec.so exports (namely PAM_USER, I think). I only see the following:

        PAM_SERVICE=openvpn
        PAM_TYPE=auth
        PWD=/usr/local/openvpn/bin
        SHLVL=1
        A__z="*SHLVL

    I do successfully pick up the password off of STDIN and output it with this same script, but for the life of me I can't get the username. Any thoughts on what I should try next?

    Read the article

  • Multiple CAS servers with Microsoft Exchange and selective authorization

    - by John Wilcox
    I have a Microsoft Exchange 2010 organization within one Microsoft Windows domain, and I have users accessing it through OWA. For simplicity, let's say I currently have one CAS server (CAS 1), which is accessible only through a VPN connection. Let's call the users connecting to the first CAS server group A. For some users, though, I need to install another CAS server (CAS 2) so that they can connect without using a VPN connection. Let's call those users group B. What I need to achieve is that group A can only log in to CAS 1 and group B can only log in to CAS 2. Now, I know that one can disable/enable OWA per user, but in my case that is not enough, because OWA must be enabled for both groups.

    Read the article
