Search Results

Search found 11960 results on 479 pages for 'auto implemented propert'.


  • Setting up shared connection

    - by Calvin Froedge
    I have a network that is connected to the internet via a switch connected to a router. I set it up like this so I can work on the new network without causing problems on the old. Anyway, I'm trying to enable internet connection sharing. Internet comes to the server like this: Modem - Router - Switch - Ubuntu 11.10 (eth0). I want to share the connection through eth1 (eth1 - Managed Switch - Clients). I have a DHCP server running on eth1. Here is its config:

        ddns-update-style none;
        option domain-name "myserver.local";
        option domain-name-servers 192.168.1.2, 8.8.8.8;
        default-lease-time 600;
        max-lease-time 7200;
        authoritative;
        subnet 192.168.1.0 netmask 255.255.255.0 {
          interface eth1;
          range 192.168.1.3 192.168.1.254;
          option routers 192.168.1.1;
          option subnet-mask 255.255.255.0;
          option broadcast-address 192.168.1.255;
        }

    Here is /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet dhcp

        # Used for internal network
        auto eth1
        iface eth1 inet static
          address 192.168.1.2
          netmask 255.255.255.0
          broadcast 192.168.1.255
          network 192.168.1.0

    Here is /etc/hosts:

        127.0.0.1   localhost
        127.0.1.1   myserver.isp.com server
        192.168.1.2 server.myserver.local server myserver.local

    In /etc/sysctl.conf I've set net.ipv4.ip_forward=1. Finally, in /etc/rc.local, I've set the following:

        /sbin/iptables -P FORWARD ACCEPT
        /sbin/iptables --table nat -A POSTROUTING -o eth1 -j MASQUERADE

    When I ping 8.8.8.8 (Google's DNS) from a client that has a lease from my DHCP server (a local IP such as 192.168.1.10), I get a timeout. How can I debug this further to figure out where my problem is?
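    A first thing to check (an observation on the config above, not a confirmed diagnosis): the MASQUERADE rule is attached to eth1, the internal interface, but NAT has to happen on the interface that faces the internet, which in this setup is eth0. A minimal sketch of the corrected rules plus two debugging steps:

        # NAT outbound traffic on the upstream interface (eth0, per the setup above)
        /sbin/iptables -P FORWARD ACCEPT
        /sbin/iptables --table nat -A POSTROUTING -o eth0 -j MASQUERADE

        # Confirm forwarding is actually on, then watch whether client packets
        # arrive on eth1 and leave on eth0 while a client pings 8.8.8.8
        sysctl net.ipv4.ip_forward
        tcpdump -ni eth1 icmp
        tcpdump -ni eth0 icmp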


  • Center Page Content Horizontally using Div with CSS

    - by Aamir Hasan
    Center your website content, creating equal space on the left and right, using CSS. A block is horizontally centered by setting its right and left margin widths to "auto"; this is the preferred way to accomplish horizontal centering with CSS. Create a wrapper div that will hold your content div, give it a fixed width, and then give it margin: auto, which will bring the wrapper div to the center of the page.

        <html>
        <head>
          <title>Center Page Content Horizontally using Div with CSS</title>
          <style type="text/css">
            body     { background-color: #eaeaea; }
            #wrapper { width: 777px; margin: auto; }
            #content { background-color: #00FF00; min-height: 400px; }
          </style>
        </head>
        <body>
          <form id="form1" runat="server">
            <div id="wrapper">
              <div id="content">
                Welcome to Studentacad.com
              </div>
            </div>
          </form>
        </body>
        </html>


  • mdadm: breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent about 20 hours on this nice error, and it seems dozens of people around the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot it says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server - I have to bring over a monitor and keyboard to get past it. After this the system boots and it's all fine: the md0 device works, and /proc/mdstat is fine. When I do mount -a, it mounts this array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints how to make it work properly? fstab:

        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (autogenerated):

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR CENSORED

        # definitions of existing MD arrays
        ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

        # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
        # by mkconf 3.1.2-2
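    A commonly suggested check for this symptom (an assumption based on typical causes, not a verified fix for this exact box): the mdadm.conf copy baked into the initramfs can fall out of sync with the running array, so early boot looks for an array definition that no longer matches. A minimal sketch of the comparison and rebuild:

        # Compare the live array definition with what mdadm.conf says
        sudo mdadm --detail --scan
        cat /etc/mdadm/mdadm.conf

        # If they differ (name, UUID, homehost), fix the file, then rebuild
        # the initramfs so early boot sees the same definition
        sudo update-initramfs -u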


  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    Live Upgrade copies over ZFS root clone

    This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        Comparing source boot environment file systems with the file system(s)
        you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Creating file systems on boot environment .
        Creating file system for  in zone  on .

        The error indicator ----->
        /usr/lib/lu/lumkfs: test: unknown operator zfs

        Populating file systems on boot environment .
        Checking selection integrity.
        Integrity check OK.
        Populating contents of mount point .

        This should not happen ----->
        Copying.

        Ctrl-C and cleanup

    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or - the one that alerted me to the problem - a root file system that is filling up, again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

        # patchadd -p | grep 121431
        Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
        Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
        # unzip 121431-58
        # patchadd 121431-58
        Validating patches...
        Loading patches installed on the system... Done!
        Loading patches requested to install. Done!
        Checking patches that you specified for installation. Done!
        Approved patches will be installed in this order:
        121431-58
        Checking installed patches...
        Executing prepatch script...
        Installing patch packages...
        Patch 121431-58 has been successfully installed.
        See /var/sadm/patch/121431-58/log for details
        Executing postpatch script...
        Patch packages installed:
          SUNWlucfg
          SUNWlur
          SUNWluu
        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        INFORMATION: Unable to determine size or capacity of slice .
        Comparing source boot environment file systems with the file system(s)
        you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        INFORMATION: Unable to determine size or capacity of slice .
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Cloning file systems from boot environment  to create boot environment .
        Creating snapshot for  on .
        Creating clone for  on .
        Setting canmount=noauto for  in zone  on .
        Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
        Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
        Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
        File propagation successful
        Copied GRUB menu from PBE to ABE
        No entry for BE  in GRUB menu
        Population of boot environment  successful.
        Creation of boot environment  successful.

    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

        # cat /etc/lu/ICF.3
        s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
        s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
        s10u8-baseline:/vbox:pandora/vbox:zfs:0
        s10u8-baseline:/setup:pandora/setup:zfs:0
        s10u8-baseline:/export:pandora/export:zfs:0
        s10u8-baseline:/pandora:pandora:zfs:0
        s10u8-baseline:/panroot:panroot:zfs:0
        s10u8-baseline:/workshop:pandora/workshop:zfs:0
        s10u8-baseline:/export/iso:pandora/iso:zfs:0
        s10u8-baseline:/export/home:pandora/home:zfs:0
        s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
        s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0

    Solaris 10 9/10 introduces a new autoregistration file

    This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

        # luupgrade -u -s /mnt -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE  in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is 
        Mounting miniroot at 
        ERROR: The auto registration file  does not exist or incomplete.
        The auto registration file is mandatory for this upgrade.
        Use -k  argument along with luupgrade command.
        autoreg_file is path to auto registration information file.
        See sysidcfg(4) for a list of valid keywords for use in this file.
        The format of the file is as follows.
        oracle_user=xxxx
        oracle_pw=xxxx
        http_proxy_host=xxxx
        http_proxy_port=xxxx
        http_proxy_user=xxxx
        http_proxy_pw=xxxx
        For more details refer "Oracle Solaris 10 9/10 Installation Guide: Planning for Installation and Upgrade".

    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

        # echo "autoreg=disable" > /var/tmp/no-autoreg
        # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE  in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is 
        Mounting miniroot at 
        #######################################################################
        NOTE: To improve products and services, Oracle Solaris communicates
        configuration data to Oracle after rebooting.

        You can register your version of Oracle Solaris to capture this data
        for your use, or the data is sent anonymously.

        For information about what configuration data is communicated and how
        to control this facility, see the Release Notes or
        www.oracle.com/goto/solarisautoreg.

        INFORMATION: After activated and booted into new BE ,
        Auto Registration happens automatically with the following Information

        autoreg=disable
        #######################################################################
        Validating the contents of the media .
        The media is a standard Solaris media.
        The media contains an operating system upgrade image.
        The media contains  version .
        Constructing upgrade profile to use.
        Locating the operating system upgrade program.
        Checking for existence of previously scheduled Live Upgrade requests.
        Creating upgrade profile for BE .
        Checking for GRUB menu on ABE .
        Saving GRUB menu on ABE .
        Checking for x86 boot partition on ABE.
        Determining packages to install or upgrade for BE .
        Performing the operating system upgrade of the BE .
        CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.

    The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    Technorati Tags: Oracle Solaris, Patching, Live Upgrade


  • Have to run sudo dhclient eth0 automatically every boot

    - by Fyksen
    I just installed Ubuntu 12.04.1 from the alternate installer (for RAID 0 on some disks). I have some problems with the network. I'm at school, we use cable, and it has IPv6. If I run ifconfig eth0, here's my output:

        eth0  Link encap:Ethernet  HWaddr e0:cb:4e:87:ff:db
              inet addr:128.39.194.217  Bcast:128.39.194.223  Mask:255.255.255.224
              inet6 addr: 2001:700:1100:8008:e2cb:4eff:fe87:ffdb/64 Scope:Global
              inet6 addr: fe80::e2cb:4eff:fe87:ffdb/64 Scope:Link
              inet6 addr: 2001:700:1100:8008:48f7:c23:1d87:da6c/64 Scope:Global
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1063378 errors:0 dropped:0 overruns:0 frame:0
              TX packets:489811 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1577173461 (1.5 GB)  TX bytes:37043669 (37.0 MB)
              Interrupt:68 Base address:0x6000

    My /etc/network/interfaces looks like this:

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        # NetworkManager#iface eth0 inet dhcp
        # NetworkManager#hostname 2001:700:1100:1::4

        # This is an autoconfigured IPv6 interface
        iface eth0 inet6 auto

    (I had to remove the hash marks in my original post because of the big font I got on Ask Ubuntu.) The network manager says that I'm not connected. Let me know if you need any more information. :)
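    One thing that stands out (an observation, not a confirmed fix): the iface eth0 inet dhcp line is commented out with a NetworkManager marker, so ifup never requests an IPv4 lease at boot and dhclient has to be run by hand. A minimal sketch of what the eth0 stanza might look like if the interface should get IPv4 via DHCP alongside the autoconfigured IPv6:

        auto eth0
        iface eth0 inet dhcp

        # keep the autoconfigured IPv6 interface
        iface eth0 inet6 auto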



  • Is this project Structure Valid?

    - by rafuru
    I have a dilemma: at university we learn to create modular software (in Java), but this modularity is explained using a single project with packages (a package for business logic, another one for DAOs, another one for the model, and a last package for the frontend). But at my work we use the following structure. First we create a Java library project where the model (entity classes) is created in a package. Next we create an EJB named DAOS, and using the NetBeans wizard we store the DAO interfaces in the library project in another package; these interfaces are implemented in the DAOS bean. The next part is the business logic: we create a business EJB for each group of functions, and again using the wizard we store the interface in the Java library project in another package; it is then implemented in the business bean. The final part (for the backend) is a bean that I have suggested: a facade bean that gathers every method of the business beans into a single bean; this too has an interface created in our library project and implemented in the bean. The web project then calls the facade module. But I don't know how valid or viable this is - maybe I'm doing everything wrong and I don't even know it! So I want to ask your opinion about this.
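    For concreteness, here is a minimal Java sketch of the layering described above. All names (Client, ClientDao, ClientBusiness, AppFacade) are hypothetical, each type would live in its own file, and the annotations assume Java EE stateless session beans:

        // Library project: entities and interfaces only.
        public class Client { public long id; }
        public interface ClientDao { Client find(long id); }
        public interface ClientBusiness { Client loadClient(long id); }
        public interface AppFacade { Client loadClient(long id); }

        // DAO bean: implements the DAO interface from the library.
        @javax.ejb.Stateless
        public class ClientDaoBean implements ClientDao {
            public Client find(long id) { /* JPA lookup would go here */ return null; }
        }

        // Business bean: one per group of functions.
        @javax.ejb.Stateless
        public class ClientBusinessBean implements ClientBusiness {
            @javax.ejb.EJB private ClientDao dao;
            public Client loadClient(long id) { return dao.find(id); }
        }

        // Facade bean: the single entry point the web project injects and calls.
        @javax.ejb.Stateless
        public class AppFacadeBean implements AppFacade {
            @javax.ejb.EJB private ClientBusiness clients;
            public Client loadClient(long id) { return clients.loadClient(id); }
        }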


  • Should you create a class within a method?

    - by Amndeep7
    I have made a program in Java that is an implementation of this project: http://nifty.stanford.edu/2009/stone-random-art/sml/index.html. Essentially, you create a mathematical expression and, using the pixel coordinate as input, make a picture. After I initially implemented this in serial, I implemented it in parallel, because if the picture size is too large or the mathematical expression too complex (especially since I build the expression recursively), it takes a really long time. During this process, I realized that I needed two classes implementing the Runnable interface, as I had to pass parameters in for the run method, which you aren't allowed to do directly. One of these classes ended up being a medium-sized static inner class (not large enough to justify an independent class file, though). The other, however, just needed a few parameters to determine some indexes and the size of the for loop that I was running in parallel - here it is:

        class DataConversionRunnable implements Runnable {
            int jj, kk, w;

            DataConversionRunnable(int column, int matrix, int wid) {
                jj = column;
                kk = matrix;
                w = wid;
            }

            public void run() {
                for (int i = 0; i < w; i++)
                    colorvals[kk][jj][i] = (int) ((raw[kk][jj][i] + 1.0) * 255 / 2.0);
                increaseCounter();
            }
        }

    My question is: should I make it a static inner class, or can I just create it in a method? What is the general programming convention followed in this case?
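    For reference, a class declared inside a method (a "local class") is legal Java and keeps the helper's scope as narrow as its use. A minimal sketch, with a hypothetical enclosing method and assuming colorvals, raw and increaseCounter() live in the enclosing class:

        void convertColumn(final int column, final int matrix, final int wid) {
            // A local class may read the (effectively) final parameters directly,
            // so no constructor boilerplate is needed.
            class DataConversion implements Runnable {
                public void run() {
                    for (int i = 0; i < wid; i++)
                        colorvals[matrix][column][i] =
                                (int) ((raw[matrix][column][i] + 1.0) * 255 / 2.0);
                    increaseCounter();
                }
            }
            new Thread(new DataConversion()).start();
        }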


  • How can I make an unmounted / unmountable NTFS disk not show up in the nautilus devices area?

    - by Dennis
    I have an idea that my /etc/fstab is a real mish-mash, and I don't remember how it got that way. First of all, it looks like this:

        UUID=9EB80807B807DD21                     /media/Storage ntfs-3g users     0 0
        UUID=a60397fd-964a-45b1-ad35-53c8a4bee010 /              ext4    defaults  0 1
        UUID=1764825d-b8ba-4620-b3b0-e979b6f4f5c4 swap           swap    sw        0 0
        UUID=255DA1E406E29DBC                     /media/sda2    ntfs-3g defaults  0 0
        UUID=2CCCF161CCF1262C                     /mnt/sda1      ntfs-3g umask=000 0 0
        /dev/fd0                                  /media/floppy0 vfat    noauto    0 0

    I started with an old XP install on disk /dev/sda that I don't use anymore but didn't want to delete, so I shrunk the XP partition, added an NTFS partition that would be common to both systems (labeled "Common" in XP), then installed Lucid on an extended ext4 partition. On this disk the ext4 system partition comes up as /, the go-between partition auto-mounts on /media/sda1 but shows up in Nautilus as COMMON, while the XP system disk does not show up in Nautilus, but I can get to it by navigating to /mnt/sda1. A second hard drive (/dev/sdb) that I stuck in was already formatted NTFS with a bunch of stuff and labeled "Storage". It auto-mounts to /media/Storage, but another, un-mounted disk also shows up in the Nautilus device area called Storage, and it can't be mounted (here and in "Places" are the only times it appears). I would primarily like this non-existent (or already mounted, depending on how you look at it) disk to not show up, but I wouldn't mind an explanation of why one labeled partition auto-mounts to a /media mount point but shows up by label, one does not show up as mounted at all but mounts to a /mnt mount point and is there for navigation, and one is mounted to a directory with the same name as its label. I would love some consistency / direction on what is proper in this circumstance. No doubt I caused this with the fstab, but I really don't remember what my rationale was, if I edited it manually.
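    A useful first step when untangling an fstab like this (a general suggestion, not specific to this machine): list every partition with its UUID and label, then make each fstab line's mount point match the name you expect to see in the file manager:

        # Show device, UUID and LABEL for every partition
        sudo blkid

        # Example fstab line (illustrative UUID and mount point) putting a data
        # partition under /media so the desktop shows it by that directory name:
        # UUID=9EB80807B807DD21  /media/Common  ntfs-3g  defaults,users  0 0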


  • How can I make multiple displays work on my Asus UX32VD?

    - by oKtosiTe
    Original title: "Why do I have two trash icons in the Unity Launcher?"

    Whether I run Ubuntu as a live USB or install it, I always have two trash bins on the Unity Launcher. Both work, and both open the same location. This seems a bit redundant; what could be done about it?

    Update: Turning auto-hide on made it obvious that I have multiple Launchers showing. With auto-hide off, they simply overlap, making it look like there's a double trash icon, but with auto-hide enabled, I can display one Launcher (and therefore one trash icon) at a time. Still, two are running simultaneously.

    Second update: This problem appears to be caused by the way Ubuntu handles multiple displays on my Asus UX32VD Ultrabook. Somehow, the laptop display cannot be used while my external display is connected. It is shown in the Displays list, but remains black no matter how I configure it. The external display runs at 1920x1200; the laptop monitor should run at 1920x1080. It therefore becomes obvious that the Launcher that's supposed to run on the laptop display is actually displayed on the external monitor. Using nomodeset as a kernel parameter, as indicated here, makes the laptop display inaccessible altogether, detecting the external monitor as the laptop display and making resolutions other than 1920x1200 inaccessible. That is not an option.


  • Java JRE 7 Automatic Upgrade and Demantra Requirements - Action Required

    - by user702295
    The following applies to ALL Demantra, EBS and Demantra-Oracle integrations: all EBS desktop administrators must disable JRE Auto-Update for their end-users immediately. See this externally published article:

        URGENT BULLETIN: Disable JRE Auto-Update for All E-Business Suite End-Users
        https://blogs.oracle.com/stevenChan/entry/bulletin_disable_jre_auto_update

    Why is this required? If you have Auto-Update enabled, your JRE 1.6 version will be updated to JRE 7. This may happen as early as July 3, 2012, and will definitely happen after Sept. 7, 2012, following the release of 1.6.0_35 (6u35). Oracle Forms is not compatible with JRE 7 yet, and JRE 7 has not been certified with Oracle E-Business Suite yet. Oracle E-Business Suite functionality based on Forms - e.g. Financials - will stop working if you upgrade to JRE 7.

    Related news: Java 1.6.0_33 is certified with Oracle E-Business Suite. See this externally published article:

        Java JRE 1.6.0_33 Certified with Oracle E-Business Suite
        https://blogs.oracle.com/stevenChan/entry/jre_1_6_0_33


  • Python Web Applications: What is the way and the method to handle Registrations, Login-Logouts and Cookies? [on hold]

    - by Phil
    I am working on a simple Python web application for learning purposes. I have chosen a very minimalistic and simple framework. I have done a significant amount of research, but I couldn't find a source clearly explaining what I need, which is as follows. I would like to learn more about:

    1. User registration
    2. User log-ins
    3. User log-outs
    4. User auto-logins

    I have successfully handled items 1 and 3 due to their simple nature. However, I am confused by item 2 (log-ins) and item 4 (auto-logins). When a user enters a username and password, and after hashing with salts and matching it in the DB: What information should I store in the cookies in order to keep the user logged in during the session? Do I keep username+password but encrypt them? Both, or just the password? Do I keep the username and a generated key matching their password? If I want the user to be able to auto-login (when they leave and come back to the web page), what information is then kept in the cookies? I don't want to use modules or libraries that handle these things automatically; I want to learn the basics and why something is the way it is. I would also like to point out that I don't mind reading anything you might offer on the topic that explains the hows and whys, possibly with algorithm diagrams to show the process. Some information: I know about setting headers, cookies, encryption (up to some level - obviously not an expert!), request objects, SQLAlchemy, etc. I don't want any data kept in a single web application server's store: I want multiple app servers to be able to handle a user, with whatever needs to be kept on the server stored in Postgres/MySQL via SQLAlchemy (I think this is called stateless?). Thank you.
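    One standard answer to the cookie question, shown as a minimal Python sketch (my illustration, not part of the original question): never put the password in the cookie; store either a random session ID looked up in the database, or a signed token the server can verify. The HMAC variant below stays stateless across app servers as long as they share the secret:

        import hashlib
        import hmac
        import time

        SECRET = b"shared-server-side-secret"  # same value on every app server (assumption)

        def make_token(username: str) -> str:
            """Value to set in the login cookie: username|timestamp|signature."""
            ts = str(int(time.time()))
            payload = f"{username}|{ts}"
            sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
            return f"{payload}|{sig}"

        def verify_token(token: str, max_age: int = 30 * 24 * 3600) -> str | None:
            """Return the username if the cookie is authentic and fresh, else None."""
            try:
                username, ts, sig = token.rsplit("|", 2)
            except ValueError:
                return None
            expected = hmac.new(SECRET, f"{username}|{ts}".encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, expected):  # constant-time comparison
                return None
            if time.time() - int(ts) > max_age:         # token expired
                return None
            return username

    The same make_token value works for both the session cookie (item 2) and the longer-lived auto-login cookie (item 4); only the max_age differs.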


  • protected abstract override Foo(); - er... what?

    - by Muljadi Budiman
    A couple of weeks back, a co-worker was pondering a situation he was facing. He was looking at the following class hierarchy:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase { }

        class FirstConcrete : SecondaryBase { }

        class SecondConcrete : SecondaryBase { }

    Basically, the first two classes are abstract classes, but the OriginalBase class has Test implemented as a virtual method. What he needed was to force concrete class implementations to provide a proper body for the Test method, but he can't mark the method as abstract since it is already implemented in the OriginalBase class. One way to solve this is to hide the original implementation and then force further derived classes to properly implement another method that replaces it. The code looks like the following:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase
        {
            protected sealed override void Test()
            {
                Test2();
            }

            protected abstract void Test2();
        }

        class FirstConcrete : SecondaryBase
        {
            // Have to override Test2 here
        }

        class SecondConcrete : SecondaryBase
        {
            // Have to override Test2 here
        }

    With the above code, the SecondaryBase class seals the Test method so it can no longer be overridden. It also makes an abstract method Test2 available, which forces the concrete classes to override it and provide the proper implementation. Calling Test will then invoke the Test2 implementation in each respective concrete class. I was wondering if there's a way to tell the compiler to treat the Test method in SecondaryBase as abstract, and apparently you can, by combining the abstract and override keywords. The code looks like the following:

        abstract class OriginalBase
        {
            protected virtual void Test() { }
        }

        abstract class SecondaryBase : OriginalBase
        {
            protected abstract override void Test();
        }

        class FirstConcrete : SecondaryBase
        {
            // Have to override Test here
        }

        class SecondConcrete : SecondaryBase
        {
            // Have to override Test here
        }

    The method signature looks a bit funky, because most people read the override keyword to mean you then need to provide the implementation as well, but the effect is exactly as we desired. The concepts are still valid: you're overriding the Test method from its original implementation in the OriginalBase class, but you don't want to implement it; rather, you want classes that derive from SecondaryBase to provide the proper implementation, so you also mark it as abstract. I don't think I've ever seen this before in the wild, so it was pretty neat to find that the compiler supports this case.


  • Wine causes twin view to break

    - by deanvz
    I have endlessly been playing around with the NVIDIA X Server settings and changing my xorg.conf file to try and make it work for me, and on most days it's fine. In each instance I get it working for a while, and then this morning the most bizarre thing happens: the moment I open any type of Wine program (which never used to be a problem), my TwinView setup disappears and I am left with mirrored displays. I try to change the settings in the NVIDIA driver, but it's not interested, and the screens remain mirrored. I have a workaround: restart my PC... Below are the contents of my current xorg.conf file.

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:43:34 UTC 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "LG Electronics W1934"
            HorizSync       30.0 - 83.0
            VertRefresh     56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 9400 GT"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "1"
            Option         "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1440+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection


  • OSSEC HIDS Notification "Unknown problem somewhere in the system." (seems like hdd issue)

    - by John
    From what I understand, something is wrong with the HDD. I am trying to find some commands to run tests to check whether the hard disk is OK. Here is the full list of logs after a reboot of the system, following the OSSEC notification "Unknown problem somewhere in the system.":

        kernel: ata2.00: failed command: READ FPDMA QUEUED
        kernel: res 51/40:c8:38:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F>
        kernel: ata2.00: error: { UNC }
        kernel: ata2.00: failed command: READ FPDMA QUEUED
        kernel: res 51/40:78:88:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F>
        kernel: sd 1:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: md/raid1:md1: read error corrected (8 sectors at 1461400 on sda1)
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: md/raid1:md1: read error corrected (8 sectors at 1461672 on sda1)

    Some of these log lines appear twice or even more. Thanks.
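    The usual way to test a disk that is throwing media errors like these (a general suggestion; it assumes the smartmontools package is installed, and the device name /dev/sda is taken from the log above):

        # One-time health summary and error/reallocation counters
        sudo smartctl -H /dev/sda
        sudo smartctl -a /dev/sda

        # Schedule a surface-scanning self-test, then read its result later
        sudo smartctl -t long /dev/sda
        sudo smartctl -l selftest /dev/sda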


  • Cannot get script to run at startup (tried all the simple answers)

    - by Carey Head
    I have Ubuntu Desktop 12.04 LTS running great on an older Acer desktop. I want to use this machine as an in-home server for hosting Minecraft. The command to start the Minecraft server is:

        java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui

    and that works great when I cd into the correct directory and execute the above. I created a script to do this:

        #!/bin/bash
        cd /home/myuser/minecraft-server1
        java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui &
        cd /home/myuser/minecraft-server2
        java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui &
        exit 0

    I made this .sh file executable, and it too runs great when I start it manually from the terminal. The problem I'm having is getting these to execute at startup. I have my user account on this machine set to auto-login. I have tried the following:

    - Adding the following to "Startup Applications": sh /home/myuser/myscript.sh (nothing happens on reboot).
    - Adding the same to /etc/rc.local (nothing happens on reboot). I even tested this one by running /etc/rc.local from the terminal, and it executed great - just not at boot/auto-login.
    - Adding the lines from the script directly to rc.local (nothing happens on reboot).

    I can't help but think that there's something I'm missing. The script executes great when run manually but will not run at boot/auto-login. Many thanks in advance.
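    Two things worth ruling out (suggestions based on common causes, not a confirmed diagnosis for this machine): rc.local runs as root with a minimal PATH, so a bare java may not resolve there, and "Startup Applications" entries need the script itself to be executable with a proper shebang. A minimal sketch using absolute paths, plus a cron @reboot entry as an alternative trigger:

        # In the script, use absolute paths so it works without a login shell's PATH
        cd /home/myuser/minecraft-server1 && /usr/bin/java -Xmx1024M -Xms1024M \
            -jar /home/myuser/minecraft-server1/minecraft_server.jar nogui &

        # Alternative: run it at boot from the user's crontab (edit with: crontab -e)
        @reboot /home/myuser/myscript.sh >> /home/myuser/minecraft.log 2>&1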


  • Xubuntu Nvidia Settings not remembered

    - by hozza
    I have an Nvidia card, and I'm using NVIDIA driver version 304.51 and the NVIDIA X Server Settings GUI. Everything works fine, but when I reboot and log in again, both of my two screens are set to +0+0, so they mirror each other... I change the settings so that screen 2 (NEC LED) is to the left of screen 1 (DELL), click Apply, and Save to X Configuration File... It all works, but when I log in again the settings are not remembered... This is my xorg.conf file; can anyone help out?

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 304.51 (buildd@komainu) Fri Oct 12 12:53:49 UTC 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "DELL 2005FPW"
            HorizSync       30.0 - 83.0
            VertRefresh     56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GT 640"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "Stereo" "0"
            Option         "nvidiaXineramaInfoOrder" "DFP-0"
            Option         "metamodes" "DFP-0: nvidia-auto-select +1280+0, DFP-2: nvidia-auto-select +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection
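    One thing that may be worth trying (a suggestion, not a confirmed fix): nvidia-settings saves per-user settings to ~/.nvidia-settings-rc, and that file is only reapplied if something loads it at session start. Adding this command to the desktop session's autostart entries would do that:

        # Reapply ~/.nvidia-settings-rc (as saved by the GUI) at login
        nvidia-settings --load-config-only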


  • noexec option enabled in fstab is not being applied for a limited user. Is it a bug?

    - by user170918
    The noexec option enabled in fstab is not being applied for a limited user. Is it a bug?

        $ cat /etc/fstab
        # / was on /dev/sda2 during installation
        UUID=fd7e2645-3cc4-4c6c-8b1b-016711c2fd07 /              ext4 errors=remount-ro            0 1
        # /boot was on /dev/sda1 during installation
        UUID=f3e58f86-8999-4678-a5ec-0a4b621c6e37 /boot          ext4 defaults                     0 2
        # /home was on /dev/sda9 during installation
        UUID=bcbc1c4d-46a9-4b2a-bb0a-6fe1bdeaed22 /home          ext4 defaults,nodev,nosuid        0 2
        # /tmp was on /dev/sda5 during installation
        UUID=8538eecc-bd16-40fe-ad66-7d7b9287839e /tmp           ext4 defaults,noexec,nosuid,nodev 0 2
        # /var was on /dev/sda6 during installation
        UUID=292696cf-fc15-40ab-9cd8-cee9bff7e165 /var           ext4 defaults,nosuid,nodev        0 2
        # /var/log was on /dev/sda7 during installation
        UUID=fab1f85b-ae09-4ce0-b169-c01205eb8f9c /var/log       ext4 defaults,noexec,nosuid,nodev 0 2
        # /var/log/audit was on /dev/sda8 during installation
        UUID=602f5003-4ac0-49e9-99d3-b29378ce9430 /var/log/audit ext4 defaults,noexec,nosuid,nodev 0 2
        # swap was on /dev/sda3 during installation
        UUID=a538d35b-b2e9-47f2-b72d-5dbbcf0afca0 none           swap sw                           0 0
        /dev/sdb1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0
        /dev/sdc1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0
        /dev/sdd1 /mnt/usblpsc auto noauto,user,rw,noexec,nosuid,nodev 0 0

    sudo users are not able to copy executable files from /bin into the file systems that have the noexec option set, but limited users are able to copy the same files into those file systems. Why is that?


  • Ubuntu 13.10. After login, no desktop displayed. Two Nvidia Graphics Cards, Four Monitors

    - by jmerkow
    I am working on an issue with my Ubuntu 13.10 installation. I am attempting to get 4 monitors up and running, but I am having some trouble. So far, I have installed and updated to the latest NVIDIA drivers (331.20). Initially X would not start (after installation), so I replaced my xorg.conf with xorg.conf.failsafe. This fixed that problem, but then I tried to enable the other two monitors (on the other video card) and xorg fails to start once again (after I log in, there is no desktop). I am fairly new to Linux, but I am not a complete beginner; I'm just not comfortable poking around too much on my own to troubleshoot yet.

        $ lspci -nn | grep VGA
        03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF110 [GeForce GTX 570 Rev. 2] [10de:1086] (rev a1)
        05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF110 [GeForce GTX 580] [10de:1080] (rev a1)

    It seems that the nvidia-settings tool does not produce a good xorg.conf file. Here it is:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 331.20 (buildmeister@swio-display-x86-rhel47-05) Wed Oct 30 18:20:32 PDT 2013

        Section "ServerLayout"
            Identifier     "Default Layout"
            Screen      0  "Screen0" 0 0
            Screen      1  "Screen1" RightOf "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "1"
        EndSection

        ...

        Section "Monitor"
            Identifier     "Configured Monitor"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "SHARP HDMI"
            HorizSync       15.0 - 68.0
            VertRefresh     55.0 - 76.0
        EndSection

        Section "Monitor"
            Identifier     "Monitor1"
            VendorName     "Unknown"
            ModelName      "Samsung SyncMaster"
            HorizSync       0.0 - 0.0
            VertRefresh     0.0
        EndSection

        Section "Device"
            Identifier     "Configured Video Device"
            Driver         "vesa"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GTX 570"
            BusID          "PCI:3:0:0"
        EndSection

        Section "Device"
            Identifier     "Device1"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GTX 580"
            BusID          "PCI:5:0:0"
        EndSection

        Section "Screen"
            Identifier     "Default Screen"
            Device         "Configured Video Device"
            Monitor        "Configured Monitor"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "Stereo" "0"
            Option         "nvidiaXineramaInfoOrder" "DFP-1"
            Option         "metamodes" "HDMI-0: nvidia-auto-select +640+0, DVI-I-3: nvidia-auto-select +0+1080"
            Option         "SLI" "Off"
            Option         "MultiGPU" "Off"
            Option         "BaseMosaic" "off"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

        Section "Screen"
            Identifier     "Screen1"
            Device         "Device1"
            Monitor        "Monitor1"
            DefaultDepth    24
            Option         "Stereo" "0"
            Option         "metamodes" "DVI-I-2: nvidia-auto-select +0+0"
            Option         "SLI" "Off"
            Option         "MultiGPU" "Off"
            Option         "BaseMosaic" "off"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

        Section "Extensions"
            Option         "Composite" "Disable"
        EndSection


  • Jqgrid search option not working and edit popups not closed after submit

    - by Sajith
    I am facing two problems in jqGrid: the search option is not working, and the edit popup is not closed after submit. My code is below:

        <table id="jQGridpending" style="width:auto"></table>
        <div id="jQGridpendingPager"></div>
        <table id="searchpending"></table>
        <div id="filterpending"></div>

        jQuery("#jQGridpending").jqGrid({
            url: '@Url.Action("DiscountRequest", "Admin")',
            datatype: "json",
            mtype: "POST",
            colNames: ["Id", "ClientName", "BpName", "Pdt", "DiscountReq", "DiscountAllowed", "Status"],
            colModel: [
                { name: "Id", width: 100, key: true, formatter: "integer", sorttype: "integer", hidden: true },
                { name: "ClientName", width: 150, sortable: true, search: true, stype: 'text', editrules: { required: false } },
                { name: "BpName", width: 200, sortable: true, editable: false, editrules: { required: false } },
                { name: "Pdt", width: 150, sortable: true, editable: false, editrules: { required: false } },
                { name: "DiscountReq", width: 150, sortable: false, editable: false, editrules: { required: false } },
                { name: "DiscountAllowed", width: 200, sortable: true, editable: true, editrules: { required: true } },
                { name: 'Status', index: 'Status', width: 200, sortable: false, editable: true, formatter: 'select', edittype: 'select',
                  editoptions: { value: "pending:pending;approved:approved;rejected:rejected" } },
                @* { name: "Status", width: 200, sortable: false, editable: true,
                     editrules: { required: true, minValue: 1 }, edittype: "select",
                     editoptions: {
                         async: false,
                         dataUrl: "@Url.Action("GetStatus", "Admin")",
                         buildSelect: function (response) {
                             var s = "<select>";
                             s += '<option value="0">--Select--</option><option value="pending">pending</option>';
                             return s + "</select>";
                         }
                     } }, *@
                //{ name: "Status", width: 150, sortable: true, editable: true, editrules: { required: true } },
                //{ name: "Created", width: 120, formatter: "date", formatoptions: { srcformat: "ISO8601Long", newformat: "n/j/Y g:i:s A" }, align: "center", sorttype: "date" },
            ],
            loadtext: "Processing pending request data please wait...",
            rowNum: 10,
            gridview: true,
            autoencode: true,
            loadonce: true,
            height: "auto",
            rownumbers: true,
            prmNames: { id: "Id" },
            rowList: [10, 20, 30],
            pager: '#jQGridpendingPager',
            sortname: 'id',
            sortorder: "asc",
            viewrecords: true,
            jqModal: true,
            caption: "Pending List",
            reloadAfterSubmit: true,
            editurl: '@Url.Action("UpdateDiscount", "Admin")',
        });

        jQuery("#jQGridpending").jqGrid('navGrid', '#jQGridpendingPager',
            { search: true, recreateFilter: true, add: false, searchtext: "Search", edittext: "Edit", deltext: "Delete" },
            { // EDIT
                url: '@Url.Action("UpdateDiscount", "Admin")',
                width: "auto",
                jqModal: true,
                closeOnEscape: true,
                closeAfterEdit: true,
                reloadAfterSubmit: true,
                afterSubmit: function () {
                    // Reload grid records after editing an entry in the db.
                    $(this).jqGrid('setGridParam', { datatype: 'json' });
                    return [true, '', false];
                },
            },
            { // DELETE
                url: '@Url.Action("DelDiscount", "Admin")',
                closeOnEscape: true
            },
            { // SEARCH
                closeOnEscape: true,
                searchOnEnter: true,
                multipleSearch: true,
                //overlay: 0,
                width: "auto",
                height: "auto",
            });


  • Header Div Position Edit according to Content Div

    - by Melih Mucuk
    I want to design my web site. I have two main divs: one of them is "header", the other "content". I want to set the width property to 80% and center both of them, but I can't manage to do that. The other question is about the header div: I uploaded an image and I want to design it like this. How can I do all of this? My CSS:

        #main { margin: 0px auto; margin-left: 50px; margin-right: 50px; margin-top: 25px; margin-bottom: 25px; height: auto; text-align: center; }
        #content { width: 80%; height: auto; margin: 0px auto; }
        #header { width: 80%; height: 100px; top: 20px; }
        #header_left { float: left; height: 100px; text-align: center; }
        #header_right { margin: 0px auto; height: 30px; text-align: center; margin-top: 35px; }
        #header_social { float: right; height: 30px; text-align: center; margin-top: 35px; }

    My client code:

        <div id="main">
          <div id="content">
            <div id="header">
              <div id="header_left">
                <asp:Image ID="Image1" runat="server" />
              </div>
              <div id="header_right">
                <asp:LinkButton ID="LinkButton5" runat="server" ForeColor="Gray"></asp:LinkButton> |
                <asp:LinkButton ID="LinkButton4" runat="server" ForeColor="Gray"></asp:LinkButton> |
                <asp:LinkButton ID="LinkButton3" runat="server" ForeColor="#FF3300"></asp:LinkButton> |
                <asp:LinkButton ID="LinkButton2" runat="server" ForeColor="Gray"></asp:LinkButton> |
                <asp:LinkButton ID="LinkButton1" runat="server" ForeColor="Gray"></asp:LinkButton>
              </div>
              <div id="header_social">
                <asp:ImageButton ID="ImageButton1" runat="server" />
                <asp:ImageButton ID="ImageButton2" runat="server" />
              </div>
            </div>
            ****** CONTENT FIELD ******
          </div>
        </div>
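    For what it's worth, a minimal CSS sketch of the centering part (my illustration, not the asker's code): margin: auto only centers an element that has an explicit width, and the fixed left/right margins on #main fight the auto margins, so a reduced version might look like this:

        #main          { width: 80%; margin: 25px auto; }   /* centered, 80% wide        */
        #header        { height: 100px; overflow: hidden; } /* contains floated children */
        #header_left   { float: left; }
        #header_social { float: right; }
        #content       { min-height: 400px; }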


  • How do I make a rectangle move in an image?

    - by alicedimarco
    Basically, I have an image loaded, and when I click a portion of the image, a rectangle (with no fill) shows up. If I click another part of the image, that rectangle shows up there instead: with each click, the same rectangle should appear. So far I have this code; now I don't know how to make the image appear (my image from my file directory - I have already written the code to get the image from my file directory).

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.event.MouseEvent;
        import java.awt.event.MouseListener;
        import javax.swing.JFrame;
        import javax.swing.JOptionPane;
        import javax.swing.JPanel;

        public class MP2 extends JPanel implements MouseListener {
            JFrame frame;
            JPanel panel;
            int x = 0;
            int y = 0;
            String input;

            public MP2() {
            }

            public static void main(String[] args) {
                JFrame frame = new JFrame();
                MP2 panel = new MP2();
                panel.addMouseListener(panel);
                frame.add(panel);
                frame.setSize(200, 200);
                frame.setVisible(true);
            }

            public void mouseClicked(MouseEvent event) {
                this.x = event.getX();
                this.y = event.getY();
                this.repaint();
                input = JOptionPane.showInputDialog("Something pops out");
                System.out.println(input);
            }

            public void mouseEntered(MouseEvent arg0) { }

            public void mouseExited(MouseEvent arg0) { }

            public void mousePressed(MouseEvent arg0) { }

            public void mouseReleased(MouseEvent arg0) { }

            public void paintComponent(Graphics g) {
                super.paintComponent(g);
                // this.setBackground(Color.white); // sets the bg color of the panel
                g.setColor(new Color(255, 0, 0));
                g.drawRect(x, y, 100, 100);
            }
        }
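    Since the question is how to make the image appear, a minimal sketch (the file name photo.jpg is illustrative): load the image once into a field, then paint it before the rectangle, so every repaint draws the picture with the rectangle on top:

        // Field on MP2, loaded once (javax.swing.ImageIcon blocks until the image is loaded):
        java.awt.Image img = new javax.swing.ImageIcon("photo.jpg").getImage();

        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.drawImage(img, 0, 0, this);    // draw the picture first...
            g.setColor(new Color(255, 0, 0));
            g.drawRect(x, y, 100, 100);      // ...then the click rectangle on top
        }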


  • Why doesn't the .click() work in this instance?

    - by Deshiknaves
    Basically, what this does is fade in pre-existing divs and then load an image. When the image is loaded, it is appended into one of the pre-existing divs, and then a new div with an id of xButton is appended. Later, $('#xButton').click() should hide the three divs, but it just doesn't work. If I change the click() selector to either #modalImage or #overlay, it works - just not for #xButton. I would really like to figure out why it's not working and how I should be doing this instead. Thanks.

        $(function () {
            $('#container a').click(function (e) {
                e.preventDefault();
                var id = $(this).attr('href'),
                    img = new Image();
                $('#overlay').fadeIn(function () {
                    $('#modalImage').fadeIn();
                });
                $(img).load(function () {
                    $(this).hide();
                    $('#modalImage').removeClass('loader').append(this);
                    $(this).fadeIn(function () {
                        var ih = $(this).outerHeight(),
                            iw = $(this).outerWidth(),
                            newH = ($(window).height() / 10) * 7,
                            newW = ($(window).width() / 10) * 7;
                        if (ih >= iw && ih >= newH) {
                            $(this).css('height', newH + 'px');
                            $(this).css('width', 'auto');
                        } else if (ih >= iw && newH > ih) {
                            $(this).css('height', ih + 'px');
                            $(this).css('width', 'auto');
                        } else if (iw > ih && iw >= newW) {
                            if ((newW / (iw / ih)) > newH) {
                                $(this).css('width', 'auto');
                                $(this).css('height', newH + 'px');
                            } else {
                                $(this).css('width', newW + 'px');
                                $(this).css('height', 'auto');
                            }
                        } else {
                            $(this).css('width', iw + 'px');
                            $(this).css('height', 'auto');
                        }
                        var padW = ($(window).width() - $(this).outerWidth()) / 2,
                            padH = ($(window).height() - $(this).outerHeight()) / 2;
                        console.log(padH + ' ' + padW);
                        $(this).css('top', padH);
                        $(this).css('left', padW);
                        if ($('#overlay').is(":visible") == true) {
                            ih = ih + 4;
                            $('<div id="xButton">x</div>').appendTo('#modalImage');
                            if (padH >= 0) {
                                $('#xButton').css('top', padH - 4).css('right', padW - 65);
                            } else {
                                $('#xButton').css('top', '20px').css('right', padW - 65);
                            }
                        }
                    });
                }).attr('src', id);
            });
            $('#xButton').click(function () {
                $(this).hide();
                $('#overlay').fadeOut(function () {
                    $('#modalImage img').css('top', '-9999px').remove();
                    $('#xButton').remove();
                    $('#modalImage').addClass('loader');
                });
            });
        });
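    The likely cause (a standard jQuery gotcha rather than something confirmed by the asker): $('#xButton').click(...) runs once at page load, before any #xButton element exists, so the direct binding attaches to nothing. A delegated handler, bound to an ancestor that already exists, fires for elements added later (jQuery 1.7+):

        // Bind once on the document; runs for any current or future #xButton
        $(document).on('click', '#xButton', function () {
            $(this).hide();
            $('#overlay').fadeOut(function () {
                $('#modalImage img').css('top', '-9999px').remove();
                $('#xButton').remove();
                $('#modalImage').addClass('loader');
            });
        });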


  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION

    If you are a SharePoint developer, you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich: anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model; in fact, everything done through the UI goes through the object model behind the scenes. The major disadvantage of getting at SharePoint this way is that the code needs to run on the server, which means that all web parts, event receivers, features, etc. are code deployed to the server. The second way to get to SharePoint is through the built-in web services. Much has been written about how to manipulate these web services, how to authenticate to them, and how to interact with them; the basic idea is that a remote application or process can contact SharePoint through a web service. A lot has been written about how great these web services are. This article is written to document the limitations, and some of the issues and frustrations, of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of them was made against the alternative of creating my own WCF services to do what I needed.

    The current project I'm working on involves several "integration points": a remote application, installed on a separate server, contacts SharePoint and performs a task or operation. So I started up Visual Studio and built a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although many of the principles are common sense that I had already incorporated into my coding practices). I was to deliver this DLL to the application team, and they would simply call the methods exposed by it and voila! it would perform some task or operation in SharePoint.

    SOLUTION

    My integration layer implemented an interface that defined some of the basic integration tasks I was to put together. My data layer was about the same: it implemented an interface with some of the tasks I was going to develop. This gave me the opportunity to develop different data layers - ultimately, different ways to get at SharePoint if I needed them. This is a classic SOLID principle. In this case it proved quite helpful, because I wrote one data layer completely implementing the SharePoint built-in web services and another implementing my own WCF service. I should mention there is another layer underneath the data layer: in referencing the SharePoint or WCF services in my Visual Studio project, I created a class for every web service. For example, for Lists.asmx I created a class called "DocumentRetrieval"; this class does the grunt work of connecting to the correct URL, performing the basic operation of contacting the service, and so on. For Views.asmx I implemented a class called "ViewRetrieval", with the same idea as the last class, but interacting with all the operations in Views.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This, again, is a classic SOLID principle. So, to compare them side by side, we can look at both data layers and what is involved in each.
    Let's take a look at the "Create Project" task or operation. The integration point is described as: "the dll is to provide a way to create a project in SharePoint." Projects, in this case, are basically document libraries; I am to implement a way for a remote application to create a document library in SharePoint. Easy enough, right? Use the Lists.asmx web service in SharePoint. So here we go! Let's take a look at the code. I added the Lists.asmx web service reference to my project, and this is the class that contacts it:

        class DocumentRetrieval
        {
            private ListsSoapClient _service;
            private bool _impersonation;

            public DocumentRetrieval(bool impersonation, string endpt)
            {
                _service = new ListsSoapClient();
                this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));
                _impersonation = impersonation;
                if (_impersonation)
                {
                    _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];
                    _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];
                    _service.ClientCredentials.Windows.AllowedImpersonationLevel =
                        System.Security.Principal.TokenImpersonationLevel.Impersonation;
                }
            }

            private void SetEndPoint(string p)
            {
                _service.Endpoint.Address = new EndpointAddress(p);
            }

            /// <summary>
            /// Creates a document library with a specific name and template ID.
            /// </summary>
            /// <param name="listName">New list name</param>
            /// <param name="templateID">Template ID</param>
            /// <returns></returns>
            public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)
            {
                XmlDocument sample = new XmlDocument();
                XmlElement viewCol = sample.CreateElement("Empty");
                try
                {
                    _service.Open();
                    viewCol = _service.AddList(listName, "", templateID);
                }
                catch (Exception ex)
                {
                    exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);
                }
                finally
                {
                    _service.Close();
                }
                return viewCol;
            }
        }

    There was a lot more in this class (that I am not including), because I was reusing the grunt work for other operations with Lists.asmx - for example, updating content types and changing or configuring lists or document libraries. One of the first things I noticed about working with the built-in services is that you are really at the mercy of what is available to you. Before creating a document library (project), I wanted to expose an IsProjectExisting method so the integration or data layer could recognize whether a library already exists. Well, there is no service call or method available to do that check.
    So this is what I wrote:

        public bool DocLibExists(string listName, ref ExceptionContract exContract)
        {
            try
            {
                var allLists = _service.GetListCollection();
                return allLists.ChildNodes.OfType<XmlElement>().ToList()
                               .Exists(x => x.Attributes["Title"].Value == listName);
            }
            catch (Exception ex)
            {
                exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);
            }
            return false;
        }

    This really just gets an XmlElement with all the lists; it was then up to me to sift through the clutter and noise to see if the document library already existed. This took a little getting used to: instead of working with code, you are working with the XmlElement response format from the web service. I wrote a LINQ query to go through it and check whether a "Title" attribute existed with a value equal to the list name - if so, return true; if not, false. I didn't particularly like working this way, dealing with XmlElement responses and then having to manipulate them to get at the exact data I was looking for. Once the DocLibExists check was done, I would either create the document library or send back an error indicating that it already existed. Now let's examine the code that actually creates the document library. It does what you are really after - it creates a document library. Notice how the template ID is really an integer: every document library template in SharePoint has an ID associated with it. Document libraries, image libraries, custom lists, project tasks, etc. - they all have a unique integer associated with them. Well, that's great, but the client came back to me with some specifics that each "project" (document library) should have. They specified three types of projects. Each project would have unique views - about 10 views per project - and each project specified unique configurations (auditing, versioning, content types, etc.). So what started as a simple implementation of creating a document library as a repository for a project turned out to be quite involved. The first thing I thought of was to create a template for the document library. There are other ways to do this too: using the web service calls, you could configure views, versioning, even content types, etc. The only catch is that you have to work quite extensively with CAML. I am not fond of CAML. I can work with it; I just don't like doing it. It is quite touchy, and at times it is quite tough to understand where errors were made in CAML statements. Working with web services and CAML proved quite annoying: the service call would return a generic error message that did not point me to a CAML syntax error, or even to a CAML error at all, and I was not sure if it was a security, performance, or code-based issue. It was also difficult at times because of the way SharePoint handles metadata: there are "Name", "Display Name", and "StaticName" fields, and it can be quite tough to understand which one to use, so it took a lot of trial and error. There are tools that can help with CAML generation, and there is now IntelliSense for CAML statements in Visual Studio that might help, but ultimately I'm not fond of CAML with web services. So I decided on the template.
So my plan was to create a document library, configure it accordingly, and then use the Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list templates, and so on. It is quite interesting because it does not generate an STP file; it actually generates an XML definition and a feature you can activate to make that template available on a site or site collection. The first issue I ran into was that one of the specifications for this template was that the "All Documents" view was to have 2 web parts on it. It turns out that the Template Builder did not include the web parts as part of the list template definition it generated; it backed up the settings, the views, and the content types, but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new document library definition with a unique ID. The problem is that the service call accepts an int, but it only has access to the built-in library int definitions; any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way.

I should also mention that one of the nice features of SharePoint is the ability to create list templates, back them up, and then create lists based on that template, all doable by end-user administrators. These templates are quite unique because they are saved as an STP file and not an XML definition. I also went this route and tried to see if there was another service call where I could create a document library based on a given template name. Nope! None.

After some thinking I decided to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries based on a template for which an ID was required, and also on templates saved as STP files. Now, I don't want to bother posting the code that contacts the WCF service because it's self-explanatory, but I will post the code that I used to create a list with a custom template.

    public ServiceResult CreateProject(string name, string templateName, string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;

        using (SPSite site = new SPSite(siteurl))
        {
            using (SPWeb rootweb = site.RootWeb)
            {
                // Custom (STP) list templates live at the site collection level
                SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);
                ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));
            }
        }
        return _globalResult;
    }

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];
                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
            }
        }
    }

    private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action)
    {
        using (SPSite sitecollection = new SPSite(siteurl))
        {
            using (SPWeb web = sitecollection.AllWebs[webguid])
            {
                action(web);
            }
        }
    }

This is actually some of the code I implemented for the service; there was a lot more I did on project creation, which I will cover in my next blog post. I implemented an Action<SPWeb> delegate (ProcessWeb) to process the web, which allowed me to properly dispose of the SPWeb and SPSite objects without rewriting that plumbing over and over again. So I implemented a WCF service to create projects for me; this allowed me to do a lot more than just create a document library from a template, and it gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented, the client came back to me and said, "We reference all our projects with IDs in our application; we want SharePoint to do the same." This is something I have been doing for a little while now, but I do hope SharePoint 2010 has more of an answer for this and addresses it properly. I have been adding metadata to SPWebs through the property bag (I believe I have blogged about it before). This time it required metadata added to a document library. No problem!!! I also mentioned the web parts that were to go on the "All Documents" view; I took the opportunity to configure them with the appropriate settings. There were two settings that needed to be set on these web parts, one of them being a Project ID configured in the web part properties.
The following code enhances and replaces the Act_CreateProject method above:

    private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps)
    {
        var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));
        if (temp != null)
        {
            SPLimitedWebPartManager wpmgr = null;
            try
            {
                Guid listGuid = targetsite.Lists.Add(name, "", temp);
                SPList newList = targetsite.Lists[listGuid];

                // SPList has no property bag, so the project ID goes on the root folder
                SPFolder rootFolder = newList.RootFolder;
                rootFolder.Properties.Add(KEY, projectId);
                rootFolder.Update();
                if (rootFolder.ParentWeb != targetsite)
                    rootFolder.ParentWeb.Dispose();

                if (!templateName.Contains("Natural"))
                {
                    SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));
                    SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);
                    wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);
                    ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);
                    alldocfile.Update();
                }

                if (newList.ParentWeb != targetsite)
                    newList.ParentWeb.Dispose();
                _globalResult = new ServiceResult(true, "Success", "Success");
            }
            catch (Exception ex)
            {
                _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());
            }
            finally
            {
                if (wpmgr != null)
                {
                    // GetLimitedWebPartManager opens its own SPWeb; dispose of it too
                    wpmgr.Web.Dispose();
                    wpmgr.Dispose();
                }
            }
        }
    }

    private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)
    {
        var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));
        if (wp != null)
        {
            (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;
            mgr.SaveChanges(wp);
        }
    }

This shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library; unfortunately, SPList does not have a property bag that I can add a key/value pair to, so it has to be done on the root folder. Now everything in the integration will reference projects by ID and will not care about names. My DocLibExists method now has to change, because the web service is not set up to look at property bags; I had to write another method on the service to do the equivalent, but with IDs instead of names. The second thing you will notice about the code is the use of the SPLimitedWebPartManager. I have seen several examples online, and have also read a lot about memory leaks; the above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose of it like I did.
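Since the post mentions rewriting DocLibExists on the WCF side to key off IDs instead of names but does not show that method, here is a minimal sketch of what it might look like. The method name DocLibExistsById is hypothetical, and the sketch assumes the same KEY constant and ProcessWeb helper shown above.

    // A sketch only: an ID-based replacement for DocLibExists, reading the project ID
    // back out of each document library's root folder property bag.
    public bool DocLibExistsById(string projectId)
    {
        string siteurl = SPContext.Current.Site.Url;
        Guid webguid = SPContext.Current.Web.ID;
        bool exists = false;

        ProcessWeb(siteurl, webguid, web =>
        {
            foreach (SPList list in web.Lists)
            {
                // Only document libraries carry the project ID
                if (list.BaseType != SPBaseType.DocumentLibrary)
                    continue;

                // The project ID was stored on the root folder at creation time
                object storedId = list.RootFolder.Properties.ContainsKey(KEY)
                    ? list.RootFolder.Properties[KEY]
                    : null;

                if (storedId != null && projectId.Equals(storedId.ToString()))
                {
                    exists = true;
                    break;
                }
            }
        });

        return exists;
    }

The integration layer can then call this instead of the name-based check before creating a project.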
CONCLUSION

This is a long, long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that web services will do the trick if you can suffer through CAML and you are only doing simple operations. For everything else, there's WCF! I apologize for the disorganization of this post: I wrote it on a bus during a 12-hour trip to Iowa, half asleep and half awake, so hopefully it makes enough sense to someone.

    Read the article

  • Unable to run Internet Explorer 7 on Wine 1.2, Ubuntu 8.04

    - by leva
    Following the instructions here: http://www.wine-reviews.net/wine-reviews/applications/ie-7-on-linux-with-wine.html I installed IE7. But when I run it with Wine 1.2 with: wine iexplore.exe& I get:

    Explorer$
    fixme:system:SetProcessDPIAware stub!
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:advapi:RegisterTraceGuidsW (0x5b9f97, 0x6f4b08, {3e1fd72a-c323-4574-9917-5ce9c936f78c}, 1, 0x32f414, (null), (null), 0x6f4b10,)
    fixme:advapi:RegisterTraceGuidsW (0x5b9f97, 0x6f4b28, {afff9c82-5be3-4205-9b3e-49e014c09a63}, 1, 0x32f414, (null), (null), 0x6f4b30,)
    fixme:advapi:RegisterTraceGuidsW (0x6cd15f38, 0x6cd20180, {e2821408-c59d-418f-ad3f-aa4e792aeb79}, 1, 0x32f260, (null), (null), 0x6cd20188,)
    fixme:process:RegisterApplicationRestart (L"-restart /WERRESTART",0)
    err:ntdll:NtQueryInformationToken Unhandled Token Information class 18!
    fixme:ole:CoInitializeSecurity ((nil),-1,(nil),(nil),2,3,(nil),0,(nil)) - stub!
    fixme:advapi:RegisterTraceGuidsA (0x5e00187b, 0x5e0155f8, {1fb3f43f-4827-46e5-89e2-b398580357a3}, 1, 0x32da50, (null), (null), 0x5e015600,)
    fixme:advapi:RegisterTraceGuidsA (0x5e00187b, 0x5e015618, {7c0334a1-4635-4d95-8d76-9cf3171ac618}, 1, 0x32da50, (null), (null), 0x5e015620,)
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=0050069c
    fixme:msimtf:DllGetClassObject ({50d5107a-d278-4871-8989-f4ceaaf59cfc} {00000001-0000-0000-c000-000000000046} 0x32dfb4)
    err:ole:apartment_getclassobject DllGetClassObject returned error 0x80040111
    err:ole:CoGetClassObject no class object {50d5107a-d278-4871-8989-f4ceaaf59cfc} could be created for context 0x401
    fixme:urlmon:ZoneMgrImpl_GetIESecurityState (0x143f20)->(1, 0x32c4b4, (nil), 0) stub
    fixme:urlmon:SecManagerImpl_ProcessUrlAction Unsupported arguments
    fixme:shdocvw:IEParseDisplayNameWithBCW stub: 0x0 L"http://go.microsoft.com/fwlink/?LinkId=74005" 0x14d030 0x32d560
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032dd20
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032db18
    err:ole:CoGetClassObject class {807c1e6c-1d00-453f-b920-b61bb7cdd997} not registered
    err:ole:CoGetClassObject no class object {807c1e6c-1d00-453f-b920-b61bb7cdd997} could be created for context 0x1
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    fixme:urlmon:SecManagerImpl_ProcessUrlAction Unsupported arguments
    fixme:shdocvw:IEParseDisplayNameWithBCW stub: 0x0 L"http://go.microsoft.com/fwlink/?LinkId=74005" 0x131468 0x158d2f4
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032de7c
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032dc74
    fixme:urlmon:Uri_IsEqual (0x165ae8)->(0x165210 0x32c164)
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d6dc
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d4d4
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d6dc
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d4d4
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=004a796c
    fixme:toolbar:TOOLBAR_CheckStyle [0x10122] TBSTYLE_REGISTERDROP not implemented
    fixme:toolbar:TOOLBAR_CheckStyle [0x10122] TBSTYLE_REGISTERDROP not implemented
    fixme:toolbar:TOOLBAR_Unkwn45D hwnd=0x10122, wParam=0x00000000, size.cx=1280, size.cy=1020 stub!
    fixme:toolbar:TOOLBAR_CheckStyle [0x10122] TBSTYLE_REGISTERDROP not implemented
    fixme:wininet:InternetSetOptionW Option INTERNET_OPTION_RESET_URLCACHE_SESSION: STUB
    fixme:urlmon:Uri_GetScheme (0x1728a8)->(0x32e310)
    fixme:urlmon:Uri_GetScheme (0x18e400)->(0x32e310)
    fixme:shell:SignalFileOpen (0x00000000):stub.
    fixme:ole:NdrCorrelationInitialize (0x158e808, 0x158e408, 1024, 0x0): stub
    fixme:rpc:NdrStubCall2 new correlation description not implemented
    fixme:ole:NdrCorrelationFree (0x158e808): stub
    fixme:ole:NdrCorrelationInitialize (0x32d098, 0x32cc98, 1024, 0x0): stub
    fixme:rpc:NdrStubCall2 new correlation description not implemented
    fixme:ole:NdrCorrelationFree (0x32d098): stub
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d02c
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032ce24
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d52c
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d324
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    fixme:shdocvw:IEParseDisplayNameWithBCW stub: 0x0 L"http://google.ca/" 0x197e00 0x17fe9e4
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d48c
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d284
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d52c
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d324
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d4d4
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d2cc
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d52c
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d324
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=005a2b88
    err:comboex:COMBOEX_WindowProc unknown msg 200b wp=00000000 lp=0032d4d4
    err:toolbar:ToolbarWindowProc unknown msg 200b wp=00000000 lp=0032d2cc
    err:rebar:REBAR_WindowProc unknown msg 200b wp=00000000 lp=

    And I am unable to open any webpages. How can I fix this?

    Read the article
