Search Results

Search found 17801 results on 713 pages for 'missing dll'.


  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code-first approach, and an ASP.NET Web API client in C# that is being refactored right now. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage database access on the Web API side. My favoured approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about: Shared data access DLL: the first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the DLL is up to date in each system, so both solutions would be able to create a migration. The main problem is that updating the Web API system is much more complicated than updating the client with its ClickOnce update solution, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries. Database first on the Web API: the second idea was to use the database-first approach on the Web API side only, but it seems that all annotations are lost on each model update. Other solutions using stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or does anyone have advice on how such a problem can be solved?
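
    For what it's worth, a minimal sketch of the shared data-access DLL idea is below. It assumes a hypothetical Customer entity and CustomerContext (neither is from the question); the point is only that both the VB.NET client and the Web API project reference the same class library, so they share one model and one migration history.

    // Hypothetical shared data-access class library, referenced by both the
    // desktop client and the Web API project. Names are illustrative only.
    using System.Data.Entity;   // EF code-first

    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class CustomerContext : DbContext
    {
        // Both applications get the same model and the same migrations,
        // as long as they reference the same build of this assembly.
        public DbSet<Customer> Customers { get; set; }
    }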

    Read the article

  • How do I architect 2 plugins that share a common component?

    - by James
    I have an object that takes in data and spits out a transformed output, called IBaseItem. I also have two parsers, IParserA and IParserB. These parsers transform external data (in formats dataA and dataB respectively) into a format usable by my IBaseItem (baseData). I want to create 2 systems, one that works with dataA and one that works with dataB. They will allow the user to enter data, match it to the right plugins/implementations, and transform the data to outData. I want to write these traffic cops myself, but have other people provide the parser and baseitem logic, and as such am implementing these items as plugins (hence the use of interfaces). Other programmers can choose to implement one or both parsers. Q: How should I structure the way base items and parsers are associated, stored, and loaded into each of my programs? Class Relations: What I've Tried: Initially I thought there should be a different dll for each of my 2 traffic cops, each with a parser and baseitem in it. However, the duplication of baseitem logic doesn't seem right (especially if the base item logic changes). I then thought the base items could all have their own dll, and then somehow associate parsers and baseitems (GUIDs?), but I don't know if the overhead of implementing the id/association adds too much complexity.
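
    As a sketch only (apart from IBaseItem and the parsers, none of these names come from the post), one way to avoid duplicating baseitem logic is to ship base items and parsers in separate plugin assemblies and pair them through a shared format id, such as a GUID:

    // Hypothetical plugin contracts: each parser declares which base item it
    // feeds by exposing a shared format id, so the host can pair them.
    using System;
    using System.Collections.Generic;

    public interface IBaseItem
    {
        Guid FormatId { get; }                  // assumed association key
        object Transform(object baseData);      // baseData -> outData
    }

    public interface IParser
    {
        Guid FormatId { get; }                  // must match an IBaseItem
        object Parse(object externalData);      // dataA or dataB -> baseData
    }

    // The "traffic cop" pairs parsers with base items at load time.
    public class PluginRegistry
    {
        private readonly Dictionary<Guid, IParser> parsers = new Dictionary<Guid, IParser>();
        private readonly Dictionary<Guid, IBaseItem> items = new Dictionary<Guid, IBaseItem>();

        public void Register(IParser parser) { parsers[parser.FormatId] = parser; }
        public void Register(IBaseItem item) { items[item.FormatId] = item; }

        public object Process(Guid formatId, object externalData)
        {
            // Pair the parser and base item that share the same format id.
            return items[formatId].Transform(parsers[formatId].Parse(externalData));
        }
    }

    Each traffic cop would then scan its plugin folder, instantiate the types it finds (for example via Assembly.LoadFrom and reflection) and call Register, so the association lives in the plugins rather than being duplicated in either host application.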

    Read the article

  • How to Pin Any File to the Start Screen in Windows 8

    - by Taylor Gibb
    By default Windows 8 only allows you to pin a few file types to the Start Screen. Read on to find out how you can change that by editing the registry. How to Pin Any File to the Start Screen in Windows 8: Press the Win + R keyboard combination to open a run box, then type notepad and press enter. When notepad opens, paste the following into the new document:

    Windows Registry Editor Version 5.00
    [HKEY_CLASSES_ROOT\*\shell\pintostartscreen]
    "MUIVerb"="@shell32.dll,-51201"
    "NeverDefault"=""
    "Description"="@shell32.dll,-51202"
    "MultiSelectModel"="Single"
    [HKEY_CLASSES_ROOT\*\shell\pintostartscreen\command]
    "DelegateExecute"="{470C0EBD-5D73-4d58-9CED-E91E22E23282}"

    Then click on the File menu item and select Save As… Before you go any further, change the Save as type to All Files. Then give your file a name ending in .reg (for example PinToStartHack.reg) and click Save. To use the hack, just double-click the .reg file you just created. When you are prompted about whether you want to continue, click Yes. Now you can pin any file to the Start Screen. Undo the Change: If you ever wish to undo the change, press the Win + R keyboard combination and type regedit, then press enter. Then drill down into HKEY_CLASSES_ROOT\*\shell\ and delete the pintostartscreen key. That's all there is to it.
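
    For completeness, the same keys can also be created programmatically; this is a hypothetical C# sketch (not from the original article) and must be run from an elevated process:

    // Creates the same pintostartscreen keys the .reg file adds.
    // Writing under HKEY_CLASSES_ROOT\* requires administrative rights.
    using Microsoft.Win32;

    class PinToStartHack
    {
        static void Main()
        {
            using (RegistryKey key = Registry.ClassesRoot.CreateSubKey(@"*\shell\pintostartscreen"))
            {
                key.SetValue("MUIVerb", "@shell32.dll,-51201");
                key.SetValue("NeverDefault", "");
                key.SetValue("Description", "@shell32.dll,-51202");
                key.SetValue("MultiSelectModel", "Single");
            }
            using (RegistryKey cmd = Registry.ClassesRoot.CreateSubKey(@"*\shell\pintostartscreen\command"))
            {
                cmd.SetValue("DelegateExecute", "{470C0EBD-5D73-4d58-9CED-E91E22E23282}");
            }
        }
    }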

    Read the article

  • Finding Buried Controls

    - by Bunch
    This post is pretty specific to an issue I had, but it still has some ideas that could be applied in other scenarios. The problem I had was updating a few buttons so their Text values could be set in the code-behind, which had a method to grab the proper value from an external source. This was so that if the application needed to be installed by a customer using a language other than English, or needed a different notation for the button's Text, they could simply update the database. Most of the time this was no big deal. However, I had one instance where the button was part of a control, the button had no set ID, and that control was only found in a dll. So there was no markup to edit for the Button. Also, updating the dll was not an option, so I had to make the best of what I had to work with. In the cs file for the aspx page hosting the control I added a Page_LoadComplete handler. The problem button was within a GridView, so I added a foreach to go through each GridViewRow and find the button I needed. Since I did not have an ID to work with besides a random ctl00$main$DllControl$gvStuff$ctl03$ctl05, using the GridView's FindControl was out. I ended up looping through each GridViewRow; then, if its RowState equaled Edit, looping through the Cells and each control in the Cell, checking each control to see if it held a Panel that contained the button. If the control was a Panel I could then loop through the controls in the Panel, find the Button with a Text of "Update" (that was the hard-coded part), and change it using the method that returns the proper value from the database.

    if (rowState.Contains("Edit"))
    {
        foreach (DataControlFieldCell rowCell in gvr.Cells)
        {
            foreach (Control ctrl in rowCell.Controls)
            {
                if (ctrl.GetType() == typeof(Panel))
                {
                    foreach (Control childCtrl in ctrl.Controls)
                    {
                        if (childCtrl.GetType() == typeof(Button))
                        {
                            Button update = (Button)childCtrl;
                            if (update.Text == "Update")
                            {
                                // hypothetical name for the database lookup described above
                                update.Text = GetExternalButtonText("Update");
                            }
                        }
                    }
                }
            }
        }
    }

    Tags: ASP.Net, CSharp
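
    An alternative worth noting (not from the original post) is a small recursive helper that walks any control tree and returns the first Button with a given Text, which avoids hard-coding the nesting of cells and panels:

    using System.Web.UI;
    using System.Web.UI.WebControls;

    public static class ControlFinder
    {
        // Depth-first search for a Button whose Text matches, regardless of its ID.
        public static Button FindButtonByText(Control root, string text)
        {
            foreach (Control child in root.Controls)
            {
                Button button = child as Button;
                if (button != null && button.Text == text)
                    return button;

                Button nested = FindButtonByText(child, text);
                if (nested != null)
                    return nested;
            }
            return null;
        }
    }

    With a helper like this the inner loops collapse to a single call such as ControlFinder.FindButtonByText(gvr, "Update").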

    Read the article

  • Problems loading Hilva tutorials

    - by Beska
    I'm a newcomer to XNA, and I'm evaluating some libraries. The Hilva Graphics Engine looks interesting, and I'm trying to run their tutorials. However, all of them give me errors. For example, if I download the ParallaxMappingSample demo, and try to build it, I get Error 1 Error loading pipeline assembly "C:\Users\Me\Desktop\ParallaxMappingSample\Hilva.Content.dll". ParallaxMappingSample I get similar errors for all of the samples. Unfortunately, this error isn't very enlightening. I can see the Hilva.Content.dll in the appropriate directory. I tried removing and readding the reference from the content project, but I get the same error. I'm not sure it's relevant, but I'm on Windows 7, I'm using Microsoft Visual Studio 2010, and XNA 4.0. Is there an easy (or difficult) solution? EDIT: If you happen to try this, even if you don't have a solution, let me know about it in a comment. Whether it works for you, or if you get the same problem...either result would be something that might let me know if it's just a problem with the tutorial, or if it's on my end.

    Read the article

  • Structure of a .NET Assembly

    - by Om Talsania
    An assembly is the smallest unit of deployment in the .NET Framework. When you compile your C# code, it is converted into a managed module. A managed module is a standard EXE or DLL. This managed module contains the IL (Microsoft Intermediate Language) code and the metadata. Apart from this it also has header information. The following describes the parts of a managed module.

    PE Header (PE32 header for 32-bit, PE32+ header for 64-bit): a standard Windows PE header which indicates the type of the file, i.e. whether it is an EXE or a DLL. It also contains the timestamp of the file's creation date and time, plus some other fields which might be needed for an unmanaged PE (Portable Executable) but are not important for a managed one. For a managed PE, the next header, the CLR header, is more important.

    CLR Header: contains the version of the CLR required, some flags, the token of the entry point method (Main), and the size and location of the metadata, resources, strong name, etc.

    Metadata: there can be many metadata tables. They can be categorized into 2 major categories: 1. tables that describe the types and members defined in your code, and 2. tables that describe the types and members referenced by your code.

    IL Code: the MSIL representation of the C# code. At runtime, the CLR converts it into native instructions.
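
    As a rough illustration (not part of the original article), reflection exposes some of these pieces from managed code:

    using System;
    using System.Reflection;

    class AssemblyInfoDemo
    {
        static void Main()
        {
            Assembly asm = Assembly.GetExecutingAssembly();

            // CLR header information: which runtime version the module targets.
            Console.WriteLine("Runtime version: " + asm.ImageRuntimeVersion);

            // Manifest metadata: the assembly's identity and its entry point.
            Console.WriteLine("Full name:       " + asm.FullName);
            Console.WriteLine("Entry point:     " + asm.EntryPoint);

            // Metadata describing defined and referenced types lives per module.
            foreach (Module module in asm.GetModules())
                Console.WriteLine("Module:          " + module.Name);

            foreach (AssemblyName reference in asm.GetReferencedAssemblies())
                Console.WriteLine("References:      " + reference.FullName);
        }
    }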

    Read the article

  • Motivation for a service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm building it with a service layer, which I understood is appropriate for such a scenario. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: why not just reference the BL.DLL (and the dal.dll) from the other UI, and whenever a change is made re-copy the dlls? It might not be so 'neat', but it is still less hassle than one more layer. {I know something is wrong in my approach and I'm probably missing the importance of the service layer; I'd like more motivation to create another layer, especially because as it is many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id), which led me to ask: was it really necessary to create the BL just because several functions actually have LOGIC inside the BL?} So I'm looking for more motivation for creating ONE MORE layer; I'm sure it's not just the convenience of not having to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (should it correspond to all the BL layer functions or not? any simple example?) or any enlightenment on the subject?
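
    For illustration only (hypothetical names, not from the question): even when every service method is one line, the service layer is usually kept as a thin facade that owns the HTTP/OData boundary, while the BL stays in a single place that is deployed once rather than copied into every UI.

    using System.Web.Http;

    // Hypothetical facade: the Web API owns transport concerns (routing,
    // serialization, auth, OData), while the business logic stays shared.
    public class CustomerPricesController : ApiController
    {
        private readonly CustomersBl customers_bl = new CustomersBl();   // assumed BL class

        [HttpGet]
        public IHttpActionResult Get(int customerId)
        {
            // The body is deliberately one line; the value of the layer is the
            // boundary it defines, not the amount of code inside each method.
            return Ok(customers_bl.Get_Customer_Prices(customerId));
        }
    }

    The practical difference from copying BL.DLL around is that a fix to the BL is deployed to one service instead of being redistributed to every UI.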

    Read the article

  • What are the steps to make a framework for a company? [on hold]

    - by bbb
    We want to make a framework for our company. Our company's mission is developing web applications, CMSs, and websites. Until now we have had a lot of problems with the various coding styles: we didn't have a framework, every programmer coded as he wanted, and it was too hard for the others to edit the code. Now we want to make a framework for the company. We want to build an archive of dll files, written by ourselves or by others, and have the programmers use only those, and we want to define a framework for how code is written. WE NEED A STRUCTURE FOR THE COMPANY. I don't know how to do this, or what the first, second, and third steps are. I need some guidance. For example, I would say that the framework should contain the following: the base should be SOLID; the method should be Code-First; the standards should be naming conventions; the type should be 3-layer programming; the method should be MVC; we should use our dll archive; the UI should be built with HTML and CSS, using Bootstrap. Am I right or not, and is this complete, or...???

    Read the article

  • Marshalling to a native library in C#

    - by Daniel Baulig
    I'm having trouble calling functions of a native library from within managed C# code. I am developing for the 3.5 compact framework (Windows Mobile 6.x) just in case this would make any difference. I am working with the waveIn* functions from coredll.dll (these are in winmm.dll in regular Windows I believe). This is what I came up with: // namespace winmm; class winmm [StructLayout(LayoutKind.Sequential)] public struct WAVEFORMAT { public ushort wFormatTag; public ushort nChannels; public uint nSamplesPerSec; public uint nAvgBytesPerSec; public ushort nBlockAlign; public ushort wBitsPerSample; public ushort cbSize; } [StructLayout(LayoutKind.Sequential)] public struct WAVEHDR { public IntPtr lpData; public uint dwBufferLength; public uint dwBytesRecorded; public IntPtr dwUser; public uint dwFlags; public uint dwLoops; public IntPtr lpNext; public IntPtr reserved; } public delegate void AudioRecordingDelegate(IntPtr deviceHandle, uint message, IntPtr instance, ref WAVEHDR wavehdr, IntPtr reserved2); [DllImport("coredll.dll")] public static extern int waveInAddBuffer(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint cWaveHdrSize); [DllImport("coredll.dll")] public static extern int waveInPrepareHeader(IntPtr hWaveIn, ref WAVEHDR lpWaveHdr, uint Size); [DllImport("coredll.dll")] public static extern int waveInStart(IntPtr hWaveIn); // some other class private WinMM.WinMM.AudioRecordingDelegate waveIn; private IntPtr handle; private uint bufferLength; private void setupBuffer() { byte[] buffer = new byte[bufferLength]; GCHandle bufferPin = GCHandle.Alloc(buffer, GCHandleType.Pinned); WinMM.WinMM.WAVEHDR hdr = new WinMM.WinMM.WAVEHDR(); hdr.lpData = bufferPin.AddrOfPinnedObject(); hdr.dwBufferLength = this.bufferLength; hdr.dwFlags = 0; int i = WinMM.WinMM.waveInPrepareHeader(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInPrepare"; return; } i = WinMM.WinMM.waveInAddBuffer(this.handle, ref hdr, Convert.ToUInt32(Marshal.SizeOf(hdr))); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInAddrBuffer"; return; } } private void setupWaveIn() { WinMM.WinMM.WAVEFORMAT format = new WinMM.WinMM.WAVEFORMAT(); format.wFormatTag = WinMM.WinMM.WAVE_FORMAT_PCM; format.nChannels = 1; format.nSamplesPerSec = 8000; format.wBitsPerSample = 8; format.nBlockAlign = Convert.ToUInt16(format.nChannels * format.wBitsPerSample); format.nAvgBytesPerSec = format.nSamplesPerSec * format.nBlockAlign; this.bufferLength = format.nAvgBytesPerSec; format.cbSize = 0; int i = WinMM.WinMM.waveInOpen(out this.handle, WinMM.WinMM.WAVE_MAPPER, ref format, Marshal.GetFunctionPointerForDelegate(waveIn), 0, WinMM.WinMM.CALLBACK_FUNCTION); if (i != WinMM.WinMM.MMSYSERR_NOERROR) { this.Text = "Error: waveInOpen"; return; } setupBuffer(); WinMM.WinMM.waveInStart(this.handle); } I read alot about marshalling the last few days, nevertheless I do not get this code working. When my callback function is called (waveIn) when the buffer is full, the hdr structure passed back in wavehdr is obviously corrupted. Here is an examlpe of how the structure looks like at that point: - wavehdr {WinMM.WinMM.WAVEHDR} WinMM.WinMM.WAVEHDR dwBufferLength 0x19904c00 uint dwBytesRecorded 0x0000fa00 uint dwFlags 0x00000003 uint dwLoops 0x1990f6a4 uint + dwUser 0x00000000 System.IntPtr + lpData 0x00000000 System.IntPtr + lpNext 0x00000000 System.IntPtr + reserved 0x7c07c9a0 System.IntPtr This obiously is not what I expected to get passed. 
    I am clearly concerned about the order of the fields in the view. I do not know if Visual Studio .NET cares about actual memory order when displaying the record in the "local"-view, but they are obviously not displayed in the order I specified in the struct. Then there's no data pointer, and the bufferLength field is far too high. Interestingly, the bytesRecorded field is exactly 64000 - I'd expect both bufferLength and bytesRecorded to be 64000, though. I do not know what exactly is going wrong; maybe someone can help me out on this. I'm an absolute noob to managed code programming and marshalling, so please don't be too harsh on me for all the stupid things I've probably done. Oh, here's the C code definition for WAVEHDR which I found here; I believe I might have done something wrong in the C# struct definition: /* wave data block header */ typedef struct wavehdr_tag { LPSTR lpData; /* pointer to locked data buffer */ DWORD dwBufferLength; /* length of data buffer */ DWORD dwBytesRecorded; /* used for input only */ DWORD_PTR dwUser; /* for client's use */ DWORD dwFlags; /* assorted flags (see defines) */ DWORD dwLoops; /* loop control counter */ struct wavehdr_tag FAR *lpNext; /* reserved for driver */ DWORD_PTR reserved; /* reserved for driver */ } WAVEHDR, *PWAVEHDR, NEAR *NPWAVEHDR, FAR *LPWAVEHDR; If you are used to working with all those low-level tools like pointer arithmetic, casts, etc., starting to write managed code is a pain in the ass. It's like trying to learn how to swim with your hands tied behind your back. Some things I tried (to no effect): .NET Compact Framework does not seem to support the Pack = 2^x directive in [StructLayout]. I tried [StructLayout(LayoutKind.Explicit)] and used 4-byte and 8-byte alignment. 4-byte alignment gave me the same result as the above code, and 8-byte alignment only made things worse - but that's what I expected. Interestingly, if I move the code from setupBuffer into setupWaveIn and do not declare the GCHandle in the context of the class but in a local context of setupWaveIn, the struct returned by the callback function does not seem to be corrupted. I am not sure, however, why this is the case and how I can use this knowledge to fix my code. I'd really appreciate any good links on marshalling, calling unmanaged code from C#, etc. Then I'd be very happy if someone could point out my mistakes. What am I doing wrong? Why do I not get what I'd expect?
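
    One frequent cause of exactly this kind of corruption is that the WAVEHDR handed to waveInAddBuffer must stay at a fixed address until the driver returns it, while the code above fills in a local struct that goes out of scope when setupBuffer returns. A rough sketch of keeping the header in unmanaged memory instead is below; this is an assumption about the likely cause, not a confirmed fix, and the IntPtr-based overloads simply mirror the post's own coredll.dll declarations:

    // Assumed fix sketch: keep the WAVEHDR (and the data buffer) alive and at
    // a fixed address for as long as the driver may touch them.
    using System;
    using System.Runtime.InteropServices;

    class WaveInBuffer
    {
        [DllImport("coredll.dll")]
        static extern int waveInPrepareHeader(IntPtr hWaveIn, IntPtr lpWaveHdr, uint size);
        [DllImport("coredll.dll")]
        static extern int waveInAddBuffer(IntPtr hWaveIn, IntPtr lpWaveHdr, uint size);

        GCHandle bufferPin;   // pins the managed byte[] backing lpData
        IntPtr headerPtr;     // the WAVEHDR itself lives in unmanaged memory

        public void Setup(IntPtr hWaveIn, uint bufferLength)
        {
            byte[] buffer = new byte[bufferLength];
            bufferPin = GCHandle.Alloc(buffer, GCHandleType.Pinned);

            var hdr = new WinMM.WinMM.WAVEHDR
            {
                lpData = bufferPin.AddrOfPinnedObject(),
                dwBufferLength = bufferLength
            };

            int size = Marshal.SizeOf(typeof(WinMM.WinMM.WAVEHDR));
            headerPtr = Marshal.AllocHGlobal(size);
            Marshal.StructureToPtr(hdr, headerPtr, false);

            waveInPrepareHeader(hWaveIn, headerPtr, (uint)size);
            waveInAddBuffer(hWaveIn, headerPtr, (uint)size);

            // Free headerPtr and bufferPin only after the driver has returned
            // the buffer (in the callback) and waveInUnprepareHeader was called.
        }
    }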

    Read the article

  • Wifi not working after a few minutes

    - by drtanz
    I'm using a few MacBooks and iPads connected to a router via WiFi. The problem is that a few minutes after they connect via WiFi the connection stops working. This happens on all devices. I went into the router settings by connecting via cable and everything seems in order. Connecting a laptop via cable to the router I can use internet as normal, the problem is only with WiFi. What can be the problem here? Here are the connected clients Connected Clients MAC Address Idle(s) RSSI(dBm) IP Addr Host Name Mode Speed (kbps) 14:10:9F:F3:48:D6 1 -36 192.168.0.5 Jeans-Air n 78000 14:99:E2:C6:41:10 1 -36 192.168.0.8 JeanGaleasiPad n 24000 Here's the router event log Mon Dec 30 04:12:30 2013 Notice (6) WiFi Interface [wl0] set to Channel 1 (Side-Band Channel:N/A)... Mon Dec 30 04:12:25 2013 Notice (6) WiFi Interface [wl0] set to Channel 1 (Side-Band Channel:5) -... Mon Dec 30 02:17:56 2013 Notice (6) WiFi Interface [wl0] set to Channel 40 (Side-Band Channel:36)... Mon Dec 30 02:16:04 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:7) ... Mon Dec 30 01:59:26 2013 Notice (6) WiFi Interface [wl0] set to Channel 6 (Side-Band Channel:N/A)... Mon Dec 30 01:59:22 2013 Notice (6) WiFi Interface [wl0] set to Channel 6 (Side-Band Channel:2) -... Sun Dec 29 23:27:51 2013 Notice (6) WiFi Interface [wl0] set to Channel 1 (Side-Band Channel:N/A)... Sun Dec 29 23:27:49 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:N/A... Sun Dec 29 14:32:55 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ... Sat Dec 28 13:08:19 2013 Error (4) DHCP REBIND WARNING - Field invalid in response ;CM-MAC=1c:3e... Fri Dec 27 18:10:19 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ... Fri Dec 27 16:08:55 2013 Error (4) Map Request Retry Timeout;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0... Thu Dec 26 21:08:53 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:7) ... Thu Dec 26 20:43:50 2013 Notice (6) WiFi Interface [wl0] set to Channel 11 (Side-Band Channel:N/A... Tue Dec 24 12:45:03 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ... Tue Dec 24 04:55:52 2013 Error (4) Map Request Retry Timeout;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0... Mon Dec 23 12:32:00 2013 Notice (6) TLV-11 - unrecognized OID;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0... Mon Dec 23 12:32:00 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.9;CM-MAC=1c:3e:... Mon Dec 23 12:32:00 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.8;CM-MAC=1c:3e:... Mon Dec 23 12:32:00 2013 Warning (5) DHCP WARNING - Non-critical field invalid in response ;CM-MAC... Mon Dec 23 18:32:02 2013 Notice (6) Honoring MDD; IP provisioning mode = IPv4 Mon Dec 23 18:31:10 2013 Critical (3) No Ranging Response received - T3 time-out;CM-MAC=1c:3e:84:f1... Mon Dec 23 18:28:57 2013 Critical (3) Received Response to Broadcast Maintenance Request, But no Un... Mon Dec 23 18:28:25 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ... Mon Dec 23 12:17:48 2013 Notice (6) TLV-11 - unrecognized OID;CM-MAC=1c:3e:84:f1:6b:84;CMTS-MAC=0... Mon Dec 23 12:17:48 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.9;CM-MAC=1c:3e:... Mon Dec 23 12:17:48 2013 Error (4) Missing BP Configuration Setting TLV Type: 17.8;CM-MAC=1c:3e:... Mon Dec 23 12:17:48 2013 Warning (5) DHCP WARNING - Non-critical field invalid in response ;CM-MAC... 
Mon Dec 23 18:17:48 2013 Notice (6) Honoring MDD; IP provisioning mode = IPv4 Mon Dec 23 18:16:58 2013 Critical (3) No Ranging Response received - T3 time-out;CM-MAC=1c:3e:84:f1... Mon Dec 23 18:16:15 2013 Critical (3) Received Response to Broadcast Maintenance Request, But no Un... Mon Dec 23 18:15:43 2013 Critical (3) Started Unicast Maintenance Ranging - No Response received - ...

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user38400
    Hi, We've spent the last two days trying to get squid 2.7 to work with ubuntu 9.10. The computer running ubuntu has two network interfaces: eth0 and eth1 with dhcp running on eth1. Both interfaces have static ip's, eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine: ERROR The requested URL could not be retrieved Invalid Request error was encountered while trying to process the request: GET / HTTP/1.1 Host: www.seriouswheels.com Connection: keep-alive User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9 Cache-Control: max-age=0 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5 Accept-Encoding: gzip,deflate,sdch Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Some possible problems are: Missing or unknown request method. Missing URL. Missing HTTP Identifier (HTTP/1.0). Request is too large. Content-Length missing for POST or PUT requests. Illegal character in hostname; underscores are not allowed. Your cache administrator is webmaster. Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is the superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this but at this point in the game we are just trying to get squid setup correctly as our proxy. Thanks in advance. squid.conf acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 192.168.0.0/24 acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow localnet http_access deny all icp_access allow localnet icp_access deny all http_port 3128 hierarchy_stoplist cgi-bin ? cache_dir ufs /var/spool/squid/cache1 1000 16 256 access_log /var/log/squid/access.log squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880 refresh_pattern . 
0 20% 4320 acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9] upgrade_http0.9 deny shoutcast acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT cache_mgr webmaster cache_effective_user proxy cache_effective_group proxy hosts_file /etc/hosts coredump_dir /var/spool/squid access.log 1269243042.740 0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html 00-firewall iptables -F iptables -t nat -F iptables -t mangle -F iptables -X echo 1 | tee /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -j MASQUERADE iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128 networking auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 142.104.109.179 netmask 255.255.224.0 gateway 142.104.127.254 auto eth1 iface eth1 inet static address 192.168.1.100 netmask 255.255.255.0

    Read the article

  • how to uninstall mariadb and re-install mysql ? Mysql install turns into mariadb install

    - by Suma
    I recently upgraded my centos system via the desktop. mistake! I had mariadb, phpmyadmin working just fine before - but after the upgrade they stopped. I frantically googled and tried to follow some tutorials about mariadb * mysql reinstall untill I came to this one: http://centosforge.com/node/how-replace-mysql-mariadb-centos-6-including-mysql-uninstall-instructions-and-yum-install I executed this command to remove all of mysql: yum remove mysql-server mysql-libs mysql-devel mysql* and then tried to reinstall mysql: as below - it crashes with errors as follows: ***************************************************************** [root@localhost ~]# yum install mysql-server mysql mysql-devel ***************************************************************** Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: centos.serverspace.co.uk * extras: centos.serverspace.co.uk * rpmforge: www.mirrorservice.org * updates: mirror.rmg.io Setting up Install Process Package mysql-server is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead Package mysql is obsoleted by MariaDB-server, trying to install MariaDB-server-5.5.29-1.i686 instead Package mysql-devel is obsoleted by MariaDB-devel, trying to install MariaDB-devel-5.5.29-1.i686 instead Resolving Dependencies --> Running transaction check ---> Package MariaDB-devel.i686 0:5.5.29-1 set to be updated --> Processing Dependency: MariaDB-common for package: MariaDB-devel ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated --> Processing Dependency: libssl.so.10 for package: MariaDB-server --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server --> Running transaction check ---> Package MariaDB-common.i686 0:5.5.29-1 set to be updated --> Processing Dependency: MariaDB-compat for package: MariaDB-common ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated --> Processing Dependency: libssl.so.10 for package: MariaDB-server --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server --> Running transaction check ---> Package MariaDB-compat.i686 0:5.5.29-1 set to be updated ---> Package MariaDB-server.i686 0:5.5.29-1 set to be updated --> Processing Dependency: libssl.so.10 for package: MariaDB-server --> Processing Dependency: libcrypto.so.10 for package: MariaDB-server --> Finished Dependency Resolution MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems --> Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) MariaDB-server-5.5.29-1.i686 from mariadb has depsolving problems --> Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) Error: Missing Dependency: libcrypto.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) Error: Missing Dependency: libssl.so.10 is needed by package MariaDB-server-5.5.29-1.i686 (mariadb) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest [root@localhost ~] If I now try to install libssl.10, i get asked to install glibc libraries. 2.17 and 2.7 - other discussions have said to stay clear of the as this will explode my system - I tried download 2.17 and it's huge - took ages to unzip. 
    Could someone please help me completely remove MariaDB and install MySQL, so that I don't get the above errors and get pushed over to MariaDB when I run: yum install mysql-server mysql mysql-devel. There is a ton of material on how to install MariaDB, but none I have found so far plainly explains how to go back to MySQL.

    Read the article

  • php-common causing conflict

    - by Cornwell
    I'm trying to install php-pdo but it always fails because of php-common bash-3.2# yum install php-pdo Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * epel: fedora-epel.mirror.lstn.net Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo --> Finished Dependency Resolution php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package. Installing php-common I get: bash-3.2# yum install php-common Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * epel: fedora-epel.mirror.lstn.net Setting up Install Process Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update. Nothing to do Searched here and google but couldn't find anything that worked EDIT: Added new repositories: bash-3.2# yum install php-common Loaded plugins: fastestmirror Repository base is listed more than once in the configuration Repository addons is listed more than once in the configuration Repository extras is listed more than once in the configuration Repository centosplus is listed more than once in the configuration Repository contrib is listed more than once in the configuration Loading mirror speeds from cached hostfile * addons: mirror.raystedman.net * base: mirror.5ninesolutions.com * centosplus: mirror.anl.gov * contrib: yum.singlehop.com * epel: fedora-epel.mirror.lstn.net * extras: centos.unmeteredvps.net * update: mirror.team-cymru.org addons | 1.9 kB 00:00 base | 1.1 kB 00:00 centosplus | 1.9 kB 00:00 centosplus/primary_db | 53 kB 00:01 contrib | 1.9 kB 00:00 contrib/primary_db | 1.1 kB 00:00 epel | 3.6 kB 00:00 extras | 2.1 kB 00:00 nginx | 2.5 kB 00:00 update | 1.9 kB 00:00 update/primary_db | 84 kB 00:04 updates | 1.9 kB 00:00 Setting up Install Process Package matching php-common-5.1.6-40.el5_9.i386 already installed. Checking for update. Nothing to do And then... 
bash-3.2# yum install php-pdo Loaded plugins: fastestmirror Repository base is listed more than once in the configuration Repository addons is listed more than once in the configuration Repository extras is listed more than once in the configuration Repository centosplus is listed more than once in the configuration Repository contrib is listed more than once in the configuration Loading mirror speeds from cached hostfile * addons: mirror.raystedman.net * base: centos.mirror.lstn.net * centosplus: mirror.anl.gov * contrib: yum.singlehop.com * epel: fedora-epel.mirror.lstn.net * extras: centos.unmeteredvps.net * update: mirrors.loosefoot.com Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package php-pdo.i386 0:5.1.6-40.el5_9 set to be updated --> Processing Dependency: php-common = 5.1.6-40.el5_9 for package: php-pdo --> Finished Dependency Resolution php-pdo-5.1.6-40.el5_9.i386 from base has depsolving problems --> Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) Error: Missing Dependency: php-common = 5.1.6-40.el5_9 is needed by package php-pdo-5.1.6-40.el5_9.i386 (base) You could try using --skip-broken to work around the problem You could try running: package-cleanup --problems package-cleanup --dupes rpm -Va --nofiles --nodigest The program package-cleanup is found in the yum-utils package.

    Read the article

  • Detecting HTML5/CSS3 Features using Modernizr

    - by dwahlin
    HTML5, CSS3, and related technologies such as canvas and web sockets bring a lot of useful new features to the table that can take Web applications to the next level. These new technologies allow applications to be built using only HTML, CSS, and JavaScript allowing them to be viewed on a variety of form factors including tablets and phones. Although HTML5 features offer a lot of promise, it’s not realistic to develop applications using the latest technologies without worrying about supporting older browsers in the process. If history has taught us anything it’s that old browsers stick around for years and years which means developers have to deal with backward compatibility issues. This is especially true when deploying applications to the Internet that target the general public. This begs the question, “How do you move forward with HTML5 and CSS3 technologies while gracefully handling unsupported features in older browsers?” Although you can write code by hand to detect different HTML5 and CSS3 features, it’s not always straightforward. For example, to check for canvas support you need to write code similar to the following:   <script> window.onload = function () { if (canvasSupported()) { alert('canvas supported'); } }; function canvasSupported() { var canvas = document.createElement('canvas'); return (canvas.getContext && canvas.getContext('2d')); } </script> If you want to check for local storage support the following check can be made. It’s more involved than it should be due to a bug in older versions of Firefox. <script> window.onload = function () { if (localStorageSupported()) { alert('local storage supported'); } }; function localStorageSupported() { try { return ('localStorage' in window && window['localStorage'] != null); } catch(e) {} return false; } </script> Looking through the previous examples you can see that there’s more than meets the eye when it comes to checking browsers for HTML5 and CSS3 features. It takes a lot of work to test every possible scenario and every version of a given browser. Fortunately, you don’t have to resort to writing custom code to test what HTML5/CSS3 features a given browser supports. By using a script library called Modernizr you can add checks for different HTML5/CSS3 features into your pages with a minimal amount of code on your part. Let’s take a look at some of the key features Modernizr offers.   Getting Started with Modernizr The first time I heard the name “Modernizr” I thought it “modernized” older browsers by added missing functionality. In reality, Modernizr doesn’t actually handle adding missing features or “modernizing” older browsers. The Modernizr website states, “The name Modernizr actually stems from the goal of modernizing our development practices (and ourselves)”. Because it relies on feature detection rather than browser sniffing (a common technique used in the past – that never worked that great), Modernizr definitely provides a more modern way to test features that a browser supports and can even handle loading additional scripts called shims or polyfills that fill in holes that older browsers may have. It’s a great tool to have in your arsenal if you’re a web developer. Modernizr is available at http://modernizr.com. Two different types of scripts are available including a development script and custom production script. To generate a production script, the site provides a custom script generation tool rather than providing a single script that has everything under the sun for HTML5/CSS3 feature detection. 
Using the script generation tool you can pick the specific test functionality that you need and ignore everything that you don’t need. That way the script is kept as small as possible. An example of the custom script download screen is shown next. Notice that specific CSS3, HTML5, and related feature tests can be selected. Once you’ve downloaded your custom script you can add it into your web page using the standard <script> element and you’re ready to start using Modernizr. <script src="Scripts/Modernizr.js" type="text/javascript"></script>   Modernizr and the HTML Element Once you’ve add a script reference to Modernizr in a page it’ll go to work for you immediately. In fact, by adding the script several different CSS classes will be added to the page’s <html> element at runtime. These classes define what features the browser supports and what features it doesn’t support. Features that aren’t supported get a class name of “no-FeatureName”, for example “no-flexbox”. Features that are supported get a CSS class name based on the feature such as “canvas” or “websockets”. An example of classes added when running a page in Chrome is shown next:   <html class=" js flexbox canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> Here’s an example of what the <html> element looks like at runtime with Internet Explorer 9:   <html class=" js no-flexbox canvas canvastext no-webgl no-touch geolocation postmessage no-websqldatabase no-indexeddb hashchange no-history draganddrop no-websockets rgba hsla multiplebgs backgroundsize no-borderimage borderradius boxshadow no-textshadow opacity no-cssanimations no-csscolumns no-cssgradients no-cssreflections csstransforms no-csstransforms3d no-csstransitions fontface generatedcontent video audio localstorage sessionstorage no-webworkers no-applicationcache svg inlinesvg smil svgclippaths">   When using Modernizr it’s a common practice to define an <html> element in your page with a no-js class added as shown next:   <html class="no-js">   You’ll see starter projects such as HTML5 Boilerplate (http://html5boilerplate.com) or Initializr (http://initializr.com) follow this approach (see my previous post for more information on HTML5 Boilerplate). By adding the no-js class it’s easy to tell if a browser has JavaScript enabled or not. If JavaScript is disabled then no-js will stay on the <html> element. If JavaScript is enabled, no-js will be removed by Modernizr and a js class will be added along with other classes that define supported/unsupported features. Working with HTML5 and CSS3 Features You can use the CSS classes added to the <html> element directly in your CSS files to determine what style properties to use based upon the features supported by a given browser. 
For example, the following CSS can be used to render a box shadow for browsers that support that feature and a simple border for browsers that don’t support the feature: .boxshadow #MyContainer { border: none; -webkit-box-shadow: #666 1px 1px 1px; -moz-box-shadow: #666 1px 1px 1px; } .no-boxshadow #MyContainer { border: 2px solid black; }   If a browser supports box-shadows the boxshadow CSS class will be added to the <html> element by Modernizr. It can then be associated with a given element. This example associates the boxshadow class with a div with an id of MyContainer. If the browser doesn’t support box shadows then the no-boxshadow class will be added to the <html> element and it can be used to render a standard border around the div. This provides a great way to leverage new CSS3 features in supported browsers while providing a graceful fallback for older browsers. In addition to using the CSS classes that Modernizr provides on the <html> element, you also use a global Modernizr object that’s created. This object exposes different properties that can be used to detect the availability of specific HTML5 or CSS3 features. For example, the following code can be used to detect canvas and local storage support. You can see that the code is much simpler than the code shown at the beginning of this post. It also has the added benefit of being tested by a large community of web developers around the world running a variety of browsers.   $(document).ready(function () { if (Modernizr.canvas) { //Add canvas code } if (Modernizr.localstorage) { //Add local storage code } }); The global Modernizr object can also be used to test for the presence of CSS3 features. The following code shows how to test support for border-radius and CSS transforms:   $(document).ready(function () { if (Modernizr.borderradius) { $('#MyDiv').addClass('borderRadiusStyle'); } if (Modernizr.csstransforms) { $('#MyDiv').addClass('transformsStyle'); } });   Several other CSS3 feature tests can be performed such as support for opacity, rgba, text-shadow, CSS animations, CSS transitions, multiple backgrounds, and more. A complete list of supported HTML5 and CSS3 tests that Modernizr supports can be found at http://www.modernizr.com/docs.   Loading Scripts using Modernizr In cases where a browser doesn’t support a specific feature you can either provide a graceful fallback or load a shim/polyfill script to fill in missing functionality where appropriate (more information about shims/polyfills can be found at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills). Modernizr has a built-in script loader that can be used to test for a feature and then load a script if the feature isn’t available. The script loader is built-into Modernizr and is also available as a standalone yepnope script (http://yepnopejs.com). It’s extremely easy to get started using the script loader and it can really simplify the process of loading scripts based on the availability of a particular browser feature. To load scripts dynamically you can use Modernizr’s load() function which accepts properties defining the feature to test (test property), the script to load if the test succeeds (yep property), the script to load if the test fails (nope property), and a script to load regardless of if the test succeeds or fails (both property). 
    An example of using load() with these properties is shown next: Modernizr.load({ test: Modernizr.canvas, yep: 'html5CanvasAvailable.js', nope: 'excanvas.js', both: 'myCustomScript.js' }); In this example Modernizr is used to not only load scripts but also to test for the presence of the canvas feature. If the target browser supports the HTML5 canvas then the html5CanvasAvailable.js script will be loaded along with the myCustomScript.js script (use of the yep property in this example is a bit contrived – it was added simply to demonstrate how the property can be used in the load() function). Otherwise, a polyfill script named excanvas.js will be loaded to add missing canvas functionality for Internet Explorer versions prior to 9. Once excanvas.js is loaded the myCustomScript.js script will be loaded. Because Modernizr handles loading scripts, you can also use it in creative ways. For example, you can use it to load local scripts when a 3rd party Content Delivery Network (CDN) such as one provided by Google or Microsoft is unavailable for whatever reason. The Modernizr documentation provides the following example that demonstrates the process for providing a local fallback for jQuery when a CDN is down:

    Modernizr.load([
      {
        load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.js',
        complete: function () {
          if (!window.jQuery) {
            Modernizr.load('js/libs/jquery-1.6.4.min.js');
          }
        }
      },
      {
        // This will wait for the fallback to load and
        // execute if it needs to.
        load: 'needs-jQuery.js'
      }
    ]);

    This code attempts to load jQuery from the Google CDN first. Once the script is downloaded (or if it fails) the function associated with complete will be called. The function checks to make sure that the jQuery object is available and if it’s not Modernizr is used to load a local jQuery script. After all of that occurs a script named needs-jQuery.js will be loaded. Conclusion: If you’re building applications that use some of the latest and greatest features available in HTML5 and CSS3 then Modernizr is an essential tool. By using it you can reduce the amount of custom code required to test for browser features and provide graceful fallbacks or even load shim/polyfill scripts for older browsers to help fill in missing functionality.

    Read the article

  • Detecting HTML5/CSS3 Features using Modernizr

    - by dwahlin
    HTML5, CSS3, and related technologies such as canvas and web sockets bring a lot of useful new features to the table that can take Web applications to the next level. These new technologies allow applications to be built using only HTML, CSS, and JavaScript allowing them to be viewed on a variety of form factors including tablets and phones. Although HTML5 features offer a lot of promise, it’s not realistic to develop applications using the latest technologies without worrying about supporting older browsers in the process. If history has taught us anything it’s that old browsers stick around for years and years which means developers have to deal with backward compatibility issues. This is especially true when deploying applications to the Internet that target the general public. This begs the question, “How do you move forward with HTML5 and CSS3 technologies while gracefully handling unsupported features in older browsers?” Although you can write code by hand to detect different HTML5 and CSS3 features, it’s not always straightforward. For example, to check for canvas support you need to write code similar to the following:   <script> window.onload = function () { if (canvasSupported()) { alert('canvas supported'); } }; function canvasSupported() { var canvas = document.createElement('canvas'); return (canvas.getContext && canvas.getContext('2d')); } </script> If you want to check for local storage support the following check can be made. It’s more involved than it should be due to a bug in older versions of Firefox. <script> window.onload = function () { if (localStorageSupported()) { alert('local storage supported'); } }; function localStorageSupported() { try { return ('localStorage' in window && window['localStorage'] != null); } catch(e) {} return false; } </script> Looking through the previous examples you can see that there’s more than meets the eye when it comes to checking browsers for HTML5 and CSS3 features. It takes a lot of work to test every possible scenario and every version of a given browser. Fortunately, you don’t have to resort to writing custom code to test what HTML5/CSS3 features a given browser supports. By using a script library called Modernizr you can add checks for different HTML5/CSS3 features into your pages with a minimal amount of code on your part. Let’s take a look at some of the key features Modernizr offers.   Getting Started with Modernizr The first time I heard the name “Modernizr” I thought it “modernized” older browsers by added missing functionality. In reality, Modernizr doesn’t actually handle adding missing features or “modernizing” older browsers. The Modernizr website states, “The name Modernizr actually stems from the goal of modernizing our development practices (and ourselves)”. Because it relies on feature detection rather than browser sniffing (a common technique used in the past – that never worked that great), Modernizr definitely provides a more modern way to test features that a browser supports and can even handle loading additional scripts called shims or polyfills that fill in holes that older browsers may have. It’s a great tool to have in your arsenal if you’re a web developer. Modernizr is available at http://modernizr.com. Two different types of scripts are available including a development script and custom production script. To generate a production script, the site provides a custom script generation tool rather than providing a single script that has everything under the sun for HTML5/CSS3 feature detection. 
Using the script generation tool you can pick the specific test functionality that you need and ignore everything that you don’t need. That way the script is kept as small as possible. An example of the custom script download screen is shown next. Notice that specific CSS3, HTML5, and related feature tests can be selected. Once you’ve downloaded your custom script you can add it into your web page using the standard <script> element and you’re ready to start using Modernizr. <script src="Scripts/Modernizr.js" type="text/javascript"></script>   Modernizr and the HTML Element Once you’ve add a script reference to Modernizr in a page it’ll go to work for you immediately. In fact, by adding the script several different CSS classes will be added to the page’s <html> element at runtime. These classes define what features the browser supports and what features it doesn’t support. Features that aren’t supported get a class name of “no-FeatureName”, for example “no-flexbox”. Features that are supported get a CSS class name based on the feature such as “canvas” or “websockets”. An example of classes added when running a page in Chrome is shown next:   <html class=" js flexbox canvas canvastext webgl no-touch geolocation postmessage websqldatabase indexeddb hashchange history draganddrop websockets rgba hsla multiplebgs backgroundsize borderimage borderradius boxshadow textshadow opacity cssanimations csscolumns cssgradients cssreflections csstransforms csstransforms3d csstransitions fontface generatedcontent video audio localstorage sessionstorage webworkers applicationcache svg inlinesvg smil svgclippaths"> Here’s an example of what the <html> element looks like at runtime with Internet Explorer 9:   <html class=" js no-flexbox canvas canvastext no-webgl no-touch geolocation postmessage no-websqldatabase no-indexeddb hashchange no-history draganddrop no-websockets rgba hsla multiplebgs backgroundsize no-borderimage borderradius boxshadow no-textshadow opacity no-cssanimations no-csscolumns no-cssgradients no-cssreflections csstransforms no-csstransforms3d no-csstransitions fontface generatedcontent video audio localstorage sessionstorage no-webworkers no-applicationcache svg inlinesvg smil svgclippaths">   When using Modernizr it’s a common practice to define an <html> element in your page with a no-js class added as shown next:   <html class="no-js">   You’ll see starter projects such as HTML5 Boilerplate (http://html5boilerplate.com) or Initializr (http://initializr.com) follow this approach (see my previous post for more information on HTML5 Boilerplate). By adding the no-js class it’s easy to tell if a browser has JavaScript enabled or not. If JavaScript is disabled then no-js will stay on the <html> element. If JavaScript is enabled, no-js will be removed by Modernizr and a js class will be added along with other classes that define supported/unsupported features. Working with HTML5 and CSS3 Features You can use the CSS classes added to the <html> element directly in your CSS files to determine what style properties to use based upon the features supported by a given browser. 
For example, the following CSS can be used to render a box shadow for browsers that support that feature and a simple border for browsers that don’t support the feature: .boxshadow #MyContainer { border: none; -webkit-box-shadow: #666 1px 1px 1px; -moz-box-shadow: #666 1px 1px 1px; } .no-boxshadow #MyContainer { border: 2px solid black; }   If a browser supports box-shadows the boxshadow CSS class will be added to the <html> element by Modernizr. It can then be associated with a given element. This example associates the boxshadow class with a div with an id of MyContainer. If the browser doesn’t support box shadows then the no-boxshadow class will be added to the <html> element and it can be used to render a standard border around the div. This provides a great way to leverage new CSS3 features in supported browsers while providing a graceful fallback for older browsers. In addition to using the CSS classes that Modernizr provides on the <html> element, you also use a global Modernizr object that’s created. This object exposes different properties that can be used to detect the availability of specific HTML5 or CSS3 features. For example, the following code can be used to detect canvas and local storage support. You can see that the code is much simpler than the code shown at the beginning of this post. It also has the added benefit of being tested by a large community of web developers around the world running a variety of browsers.   $(document).ready(function () { if (Modernizr.canvas) { //Add canvas code } if (Modernizr.localstorage) { //Add local storage code } }); The global Modernizr object can also be used to test for the presence of CSS3 features. The following code shows how to test support for border-radius and CSS transforms:   $(document).ready(function () { if (Modernizr.borderradius) { $('#MyDiv').addClass('borderRadiusStyle'); } if (Modernizr.csstransforms) { $('#MyDiv').addClass('transformsStyle'); } });   Several other CSS3 feature tests can be performed such as support for opacity, rgba, text-shadow, CSS animations, CSS transitions, multiple backgrounds, and more. A complete list of supported HTML5 and CSS3 tests that Modernizr supports can be found at http://www.modernizr.com/docs.   Loading Scripts using Modernizr In cases where a browser doesn’t support a specific feature you can either provide a graceful fallback or load a shim/polyfill script to fill in missing functionality where appropriate (more information about shims/polyfills can be found at https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills). Modernizr has a built-in script loader that can be used to test for a feature and then load a script if the feature isn’t available. The script loader is built-into Modernizr and is also available as a standalone yepnope script (http://yepnopejs.com). It’s extremely easy to get started using the script loader and it can really simplify the process of loading scripts based on the availability of a particular browser feature. To load scripts dynamically you can use Modernizr’s load() function which accepts properties defining the feature to test (test property), the script to load if the test succeeds (yep property), the script to load if the test fails (nope property), and a script to load regardless of if the test succeeds or fails (both property). 
An example of using load() with these properties is shown next:

Modernizr.load({
    test: Modernizr.canvas,
    yep: 'html5CanvasAvailable.js',
    nope: 'excanvas.js',
    both: 'myCustomScript.js'
});

In this example Modernizr is used not only to load scripts but also to test for the presence of the canvas feature. If the target browser supports the HTML5 canvas then the html5CanvasAvailable.js script will be loaded along with the myCustomScript.js script (use of the yep property in this example is a bit contrived – it was added simply to demonstrate how the property can be used in the load() function). Otherwise, a polyfill script named excanvas.js will be loaded to add the missing canvas functionality for Internet Explorer versions prior to 9. Once excanvas.js is loaded the myCustomScript.js script will be loaded.

Because Modernizr handles loading scripts, you can also use it in creative ways. For example, you can use it to load local scripts when a 3rd party Content Delivery Network (CDN) such as one provided by Google or Microsoft is unavailable for whatever reason. The Modernizr documentation provides the following example that demonstrates the process for providing a local fallback for jQuery when a CDN is down:

Modernizr.load([
    {
        load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.js',
        complete: function () {
            if (!window.jQuery) {
                Modernizr.load('js/libs/jquery-1.6.4.min.js');
            }
        }
    },
    {
        // This will wait for the fallback to load and
        // execute if it needs to.
        load: 'needs-jQuery.js'
    }
]);

This code attempts to load jQuery from the Google CDN first. Once the script is downloaded (or if the download fails) the function associated with complete will be called. The function checks to make sure that the jQuery object is available, and if it’s not, Modernizr is used to load a local jQuery script. After all of that occurs, a script named needs-jQuery.js will be loaded.

Conclusion

If you’re building applications that use some of the latest and greatest features available in HTML5 and CSS3 then Modernizr is an essential tool. By using it you can reduce the amount of custom code required to test for browser features, and provide graceful fallbacks or even load shim/polyfill scripts for older browsers to help fill in missing functionality.

    Read the article

  • System.Web.Services.Protocols.SoapException - Security permission issue

    - by Hiscal
    Can any one help me to resolve this error.My website hosted on shared environment. Server Error in '/' Application. System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.ServicePointManager.set_CertificatePolicy(ICertificatePolicy value) at BirdieThis.WebService.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The first permission that failed was: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The demand was for: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The granted set of the failing assembly was: <PermissionSet class="System.Security.PermissionSet" version="1"> <IPermission class="System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"/> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="D:\Hosting\5457055\html" Write="d:\content\;d:\hosting\" Append="D:\Hosting\5457055\html" PathDiscovery="d:\hosting\"/> <IPermission class="System.Security.Permissions.IsolatedStorageFilePermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807"/> <IPermission class="System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="RestrictedMemberAccess"/> <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration"/> <IPermission class="System.Security.Permissions.UrlIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Url="file:///D:/Hosting/5457055/html/bin/App_Code.DLL"/> <IPermission class="System.Security.Permissions.ZoneIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Zone="MyComputer"/> <IPermission class="System.Security.Permissions.KeyContainerPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Level="Medium"/> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission 
class="System.Net.DnsPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Drawing.Printing.PrintingPermission, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Level="DefaultPrinting"/> <IPermission class="System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Access="Connect"/> <IPermission class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.OleDb.OleDbPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <URI uri="http://.*"/> <URI uri="https://.*"/> </ConnectAccess> </IPermission> <IPermission class="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <ENDPOINT host="*.*.*.*" transport="Tcp" port="3306"/> </ConnectAccess> </IPermission> </PermissionSet> The assembly or AppDomain that failed was: App_Code, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null The method that caused the failure was: golfswitchs.BookGolfResult BookGolfCourse(mygolf.CourseBooking, mygolf.CoursePlayer, mygolf.CoursePayment) The Zone of the assembly that failed was: MyComputer The Url of the assembly that failed was: file:///D:/Hosting/5457055/html/bin/App_Code.DLL --- End of inner exception stack trace --- Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Web.Services.Protocols.SoapException: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. 
at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.ServicePointManager.set_CertificatePolicy(ICertificatePolicy value) at BirdieThis.WebService.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The first permission that failed was: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The demand was for: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The granted set of the failing assembly was: <PermissionSet class="System.Security.PermissionSet" version="1"> <IPermission class="System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"/> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="D:\Hosting\5457055\html" Write="d:\content\;d:\hosting\" Append="D:\Hosting\5457055\html" PathDiscovery="d:\hosting\"/> <IPermission class="System.Security.Permissions.IsolatedStorageFilePermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807"/> <IPermission class="System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="RestrictedMemberAccess"/> <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration"/> <IPermission class="System.Security.Permissions.UrlIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Url="file:///D:/Hosting/5457055/html/bin/App_Code.DLL"/> <IPermission class="System.Security.Permissions.ZoneIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Zone="MyComputer"/> <IPermission class="System.Security.Permissions.KeyContainerPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Level="Medium"/> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission class="System.Net.DnsPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Drawing.Printing.PrintingPermission, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Level="DefaultPrinting"/> <IPermission class="System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, 
Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Access="Connect"/> <IPermission class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.OleDb.OleDbPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <URI uri="http://.*"/> <URI uri="https://.*"/> </ConnectAccess> </IPermission> <IPermission class="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <ENDPOINT host="*.*.*.*" transport="Tcp" port="3306"/> </ConnectAccess> </IPermission> </PermissionSet> The assembly or AppDomain that failed was: App_Code, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null The method that caused the failure was: golfswitchs.BookGolfResult BookGolfCourse(mygolf.CourseBooking, mygolf.CoursePlayer, mygolf.CoursePayment) The Zone of the assembly that failed was: MyComputer The Url of the assembly that failed was: file:///D:/Hosting/5457055/html/bin/App_Code.DLL --- End of inner exception stack trace --- Source Error: Line 446: Line 447: oPayment.PayCurrency = "USD"; Line 448: oResult = oService.BookGolfCourse(oGolfItem, oGolfplayer, oPayment); Line 449: Response.Write(oResult.RetMsg); Line 450: Source File: c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs Line: 448 Stack Trace: [SoapException: System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. 
at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) at System.Security.CodeAccessPermission.Demand() at System.Net.ServicePointManager.set_CertificatePolicy(ICertificatePolicy value) at BirdieThis.WebService.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The first permission that failed was: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The demand was for: <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="UnmanagedCode"/> The granted set of the failing assembly was: <PermissionSet class="System.Security.PermissionSet" version="1"> <IPermission class="System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="TEMP;TMP;USERNAME;OS;COMPUTERNAME"/> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Read="D:\Hosting\5457055\html" Write="d:\content\;d:\hosting\" Append="D:\Hosting\5457055\html" PathDiscovery="d:\hosting\"/> <IPermission class="System.Security.Permissions.IsolatedStorageFilePermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807"/> <IPermission class="System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="RestrictedMemberAccess"/> <IPermission class="System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration"/> <IPermission class="System.Security.Permissions.UrlIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Url="file:///D:/Hosting/5457055/html/bin/App_Code.DLL"/> <IPermission class="System.Security.Permissions.ZoneIdentityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Zone="MyComputer"/> <IPermission class="System.Security.Permissions.KeyContainerPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Level="Medium"/> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission class="System.Net.DnsPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Drawing.Printing.PrintingPermission, System.Drawing, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Level="DefaultPrinting"/> <IPermission class="System.Net.Mail.SmtpPermission, System, Version=2.0.0.0, 
Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Access="Connect"/> <IPermission class="System.Data.SqlClient.SqlClientPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.OleDb.OleDbPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <URI uri="http://.*"/> <URI uri="https://.*"/> </ConnectAccess> </IPermission> <IPermission class="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1"> <ConnectAccess> <ENDPOINT host="*.*.*.*" transport="Tcp" port="3306"/> </ConnectAccess> </IPermission> </PermissionSet> The assembly or AppDomain that failed was: App_Code, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null The method that caused the failure was: golfswitchs.BookGolfResult BookGolfCourse(mygolf.CourseBooking, mygolf.CoursePlayer, mygolf.CoursePayment) The Zone of the assembly that failed was: MyComputer The Url of the assembly that failed was: file:///D:/Hosting/5457055/html/bin/App_Code.DLL --- End of inner exception stack trace ---] System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) +431766 System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) +204 mygolf.golfService.BookGolfCourse(CourseBooking oCourseInfo, CoursePlayer oCoursePlayer, CoursePayment oCoursePayment) +80 birdiethis.web.test.BookClub() in c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs:448 birdiethis.web.test.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\cfmdeveloper.com\subdomains\ind103\httpdocs\test.aspx.cs:28 System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14 System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35 System.Web.UI.Control.OnLoad(EventArgs e) +99 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 Version Information: Microsoft .NET Framework Version:2.0.50727.3603; ASP.NET Version:2.0.50727.3082

    Read the article

  • Why does SQL 2005 SSIS component install fail?

    - by Ducain
    I am trying to install SSIS on our production SQL 2005 SP2 box. Each time I try, the install/setup screen results in failure, starting with the native client, and moving on down. Screen shots below show what I see: Here is the result of clicking on the status link to the right of the native client after the install failed: === Verbose logging started: 3/28/2012 16:38:08 Build type: SHIP UNICODE 3.01.4000.4042 Calling process: C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\setup.exe === MSI (c) (DC:00) [16:38:08:875]: Resetting cached policy values MSI (c) (DC:00) [16:38:08:875]: Machine policy value 'Debug' is 0 MSI (c) (DC:00) [16:38:08:875]: ******* RunEngine: ******* Product: {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} ******* Action: ******* CommandLine: ********** MSI (c) (DC:00) [16:38:08:875]: Client-side and UI is none or basic: Running entire install on the server. MSI (c) (DC:00) [16:38:08:875]: Grabbed execution mutex. MSI (c) (DC:00) [16:38:08:875]: Cloaking enabled. MSI (c) (DC:00) [16:38:08:875]: Attempting to enable all disabled priveleges before calling Install on Server MSI (c) (DC:00) [16:38:08:875]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (90:F0) [16:38:08:875]: Grabbed execution mutex. MSI (s) (90:D4) [16:38:08:875]: Resetting cached policy values MSI (s) (90:D4) [16:38:08:875]: Machine policy value 'Debug' is 0 MSI (s) (90:D4) [16:38:08:875]: ******* RunEngine: ******* Product: {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} ******* Action: ******* CommandLine: ********** MSI (s) (90:D4) [16:38:08:875]: Machine policy value 'DisableUserInstalls' is 0 MSI (s) (90:D4) [16:38:08:890]: Warning: Local cached package 'C:\WINDOWS\Installer\65eb99.msi' is missing. MSI (s) (90:D4) [16:38:08:890]: User policy value 'SearchOrder' is 'nmu' MSI (s) (90:D4) [16:38:08:890]: User policy value 'DisableMedia' is 0 MSI (s) (90:D4) [16:38:08:890]: Machine policy value 'AllowLockdownMedia' is 0 MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Media enabled only if package is safe. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Looking for sourcelist for product {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Adding {F9B3DD02-B0B3-42E9-8650-030DFF0D133D}; to potential sourcelist list (pcode;disk;relpath). MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Now checking product {F9B3DD02-B0B3-42E9-8650-030DFF0D133D} MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Media is enabled for product. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Attempting to use LastUsedSource from source list. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Trying source C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\Cache\. MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Source is invalid due to invalid package code (product code doesn't match). MSI (s) (90:D4) [16:38:08:890]: Note: 1: 1706 2: -2147483646 3: sqlncli.msi MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Processing net source list. MSI (s) (90:D4) [16:38:08:890]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi MSI (s) (90:D4) [16:38:08:890]: SOURCEMGMT: Processing media source list. MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Trying media source F:\. MSI (s) (90:D4) [16:38:09:921]: Note: 1: 2203 2: F:\sqlncli.msi 3: -2147287038 MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Processing URL source list. 
MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1402 2: UNKNOWN\URL 3: 2 MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: -2147483647 3: sqlncli.msi MSI (s) (90:D4) [16:38:09:921]: Note: 1: 1706 2: 3: sqlncli.msi MSI (s) (90:D4) [16:38:09:921]: SOURCEMGMT: Failed to resolve source MSI (s) (90:D4) [16:38:09:921]: MainEngineThread is returning 1612 MSI (c) (DC:00) [16:38:09:921]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (DC:00) [16:38:09:921]: MainEngineThread is returning 1612 === Verbose logging stopped: 3/28/2012 16:38:09 === Here is the log visible when I click the failed status for MSXML6: === Verbose logging started: 3/28/2012 16:38:12 Build type: SHIP UNICODE 3.01.4000.4042 Calling process: C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\setup.exe === MSI (c) (DC:58) [16:38:12:250]: Resetting cached policy values MSI (c) (DC:58) [16:38:12:250]: Machine policy value 'Debug' is 0 MSI (c) (DC:58) [16:38:12:250]: ******* RunEngine: ******* Product: {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} ******* Action: ******* CommandLine: ********** MSI (c) (DC:58) [16:38:12:250]: Client-side and UI is none or basic: Running entire install on the server. MSI (c) (DC:58) [16:38:12:250]: Grabbed execution mutex. MSI (c) (DC:58) [16:38:12:250]: Cloaking enabled. MSI (c) (DC:58) [16:38:12:250]: Attempting to enable all disabled priveleges before calling Install on Server MSI (c) (DC:58) [16:38:12:250]: Incrementing counter to disable shutdown. Counter after increment: 0 MSI (s) (90:58) [16:38:12:265]: Grabbed execution mutex. MSI (s) (90:DC) [16:38:12:265]: Resetting cached policy values MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'Debug' is 0 MSI (s) (90:DC) [16:38:12:265]: ******* RunEngine: ******* Product: {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} ******* Action: ******* CommandLine: ********** MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'DisableUserInstalls' is 0 MSI (s) (90:DC) [16:38:12:265]: Warning: Local cached package 'C:\WINDOWS\Installer\ce6d56e.msi' is missing. MSI (s) (90:DC) [16:38:12:265]: User policy value 'SearchOrder' is 'nmu' MSI (s) (90:DC) [16:38:12:265]: User policy value 'DisableMedia' is 0 MSI (s) (90:DC) [16:38:12:265]: Machine policy value 'AllowLockdownMedia' is 0 MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Media enabled only if package is safe. MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Looking for sourcelist for product {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Adding {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA}; to potential sourcelist list (pcode;disk;relpath). MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Now checking product {56EA8BC0-3751-4B93-BC9D-6651CC36E5AA} MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Media is enabled for product. MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Attempting to use LastUsedSource from source list. MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Trying source d:\2a2ac35788eea9066bae01\. MSI (s) (90:DC) [16:38:12:265]: Note: 1: 2203 2: d:\2a2ac35788eea9066bae01\msxml6.msi 3: -2147287037 MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (90:DC) [16:38:12:265]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Processing net source list. MSI (s) (90:DC) [16:38:12:265]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:265]: SOURCEMGMT: Processing media source list. 
MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Trying media source F:\. MSI (s) (90:DC) [16:38:12:296]: Note: 1: 2203 2: F:\msxml6.msi 3: -2147287038 MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Source is invalid due to missing/inaccessible package. MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Processing URL source list. MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1402 2: UNKNOWN\URL 3: 2 MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: -2147483647 3: msxml6.msi MSI (s) (90:DC) [16:38:12:296]: Note: 1: 1706 2: 3: msxml6.msi MSI (s) (90:DC) [16:38:12:296]: SOURCEMGMT: Failed to resolve source MSI (s) (90:DC) [16:38:12:296]: MainEngineThread is returning 1612 MSI (c) (DC:58) [16:38:12:296]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1 MSI (c) (DC:58) [16:38:12:296]: MainEngineThread is returning 1612 === Verbose logging stopped: 3/28/2012 16:38:12 === When I click on the failed status for SSIS, no log file appears at all. To be honest, I'm not even sure where to start on this one - never guessed it would be so much trouble to add a component right from the disk. Any help or pointers whatsoever would be greatly appreciated. If any more details are needed, please ask - I'd be glad to add them.

    Read the article

  • I Can't Install or Remove Any Application

    - by berkay gürsoy
    when i try to install or remove an application via either software center or apt-get install they both fail and give some debconf errors below is the log please help.Sorry some of the text is not english. sudo apt-get install aptitude Paket listeleri okunuyor... Bitti Bagimlilik agaci insa ediliyor. Durum bilgisi okunuyor... Bitti Asagidaki ek paketler de yüklenecek: aptitude-common libboost-iostreams1.49.0 libcwidget3 Önerilen paketler: aptitude-doc-en aptitude-doc tasksel debtags libcwidget-dev Asagidaki YENI paketler kurulacak: aptitude aptitude-common libboost-iostreams1.49.0 libcwidget3 Yükseltilen: 0, Yeni Kurulan: 4, Kaldirilacak: 0 ve Yükseltilmeyecek: 48. 8 tam olarak kurulmadi veya kaldirilmadi. Indirilmesi gereken dosya boyutu 0 B/2.498 kB Bu islemden sonra 10,4 MB ek disk alani kullanilacak. Devam etmek istiyor musunuz [E/h]? e Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok Selecting previously unselected package aptitude-common. dpkg: uyari: files list file for package 'aspell' missing; assuming package has no files currently installed dpkg: uyari: files list file for package 'ubuntu-desktop' missing; assuming package has no files currently installed dpkg: uyari: files list file for package 'vuze' missing; assuming package has no files currently installed dpkg: uyari: files list file for package 'java-wrappers' missing; assuming package has no files currently installed (Veritabani okunuyor... 198988 files and directories currently installed.) Unpacking aptitude-common (from .../aptitude-common_0.6.8.1-2ubuntu1_all.deb) ... Selecting previously unselected package libboost-iostreams1.49.0. Unpacking libboost-iostreams1.49.0 (from .../libboost-iostreams1.49.0_1.49.0-3.1ubuntu1_amd64.deb) ... Selecting previously unselected package libcwidget3. Unpacking libcwidget3 (from .../libcwidget3_0.5.16-3.4ubuntu1_amd64.deb) ... Selecting previously unselected package aptitude. Unpacking aptitude (from .../aptitude_0.6.8.1-2ubuntu1_amd64.deb) ... wicd-daemon (1.7.2.4-2ubuntu1) kuruluyor... Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. 
debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok dpkg: error processing wicd-daemon (--configure): installed post-installation script alt islemi çikis durumunda hata döndürdü : 1 man-db (2.6.3-1) kuruluyor... Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok dpkg: error processing man-db (--configure): installed post-installation script alt islemi çikis durumunda hata döndürdü : 1 dictionaries-common (1.12.10) kuruluyor... Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok dpkg: error processing dictionaries-common (--configure): installed post-installation script alt islemi çikis durumunda hata döndürdü : 1 dpkg: dependency problems prevent configuration of aspell: aspell depends on dictionaries-common (>> 0.40); bununla beraber: Package dictionaries-common is not configured yet. dpkg: error processing aspell (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of aspell-en: aspell-en depends on aspell (>= 0.60.3-2); bununla beraber: Package aspell is not configured yet. aspell-en depends on dictionaries-common (>= 0.49.2); bununla beraber: Package dictionaries-common is not configured yet. dpkg: error processing aspell-en (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of hyphen-en-us: hyphen-en-us depends on dictionaries-common (>= 0.10) | openoffice.org-updatedicts; bununla beraber: Package dictionaries-common is not configured yet. openoffice.org-updatedicts paketi yüklenmedi. Package dictionaries-common which provides openoffice.org-updatedicts is not configured yet. dpkg: error processing hyphen-en-us (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of wicd-gtk: wicd-gtk depends on wicd-daemon (= 1.7.2.4-2ubuntu1); bununla beraber: Package wicd-daemon is not configured yet. dpkg: error processing wicd-gtk (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of wicd: wicd depends on wicd-daemon (= 1.7.2.4-2ubuntu1); bununla beraber: Package wicd-daemon is not configured yet. 
wicd depends on wicd-gtk (= 1.7.2.4-2ubuntu1) | wicd-curses (= 1.7.2.4-2ubuntu1) | wicd-cli (= 1.7.2.4-2ubuntu1) | wicd-client; bununla beraber: Package wicd-gtk is not configured yet. wicd-curses paketi yüklenmedi. wicd-cli paketi yüklenmedi. wicd-client paketi yüklenmedi. Package wicd-gtk which provides wicd-client is not configured yet. dpkg: error processing wicd (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor aptitude-common (0.6.8.1-2ubuntu1) kuruluyor... libboost-iostreams1.49.0 (1.49.0-3.1ubuntu1) kuruluyor... libcwidget3 (0.5.16-3.4ubuntu1) kuruluyor... aptitude (0.6.8.1-2ubuntu1) kuruluyor... update-alternatives: using /usr/bin/aptitude-curses to provide /usr/bin/aptitude (aptitude) in Otomatik Mod Processing triggers for libc-bin ... ldconfig deferred processing now taking place Islem sirasinda hatalar bulundu: wicd-daemon man-db dictionaries-common aspell aspell-en hyphen-en-us wicd-gtk wicd E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Software Architecture: Quality Attributes

    Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. Unfortunately, quality must be defined within the context of a situation, in that each engineer must extract quality attributes from a project’s requirements. Because quality is defined by project requirements, the meaning of quality constantly changes based on the project.

Software architecture factors that indicate relevance and effectiveness

The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes it is required to meet. Typically, when evaluating an architecture for a specific system regarding relevance and effectiveness, the following questions should be asked.

Architectural relevance and effectiveness questions:
- Does the architectural concept meet the needs of the system for which it was designed?
- Out of the competing architectures for a system, which one is the most suitable?

Looking at the first question, regarding meeting the needs of the system for which it was designed: a system that answers yes to this question must meet all of its quality goals. This means that it consistently meets or exceeds performance goals for the system. In addition, the system meets all the other required system attributes based on the system’s requirements. The suitability of a system is based on several factors. In order for a project to be suitable, the necessary resources must be available to complete the task.

Standard project resources:
- Money
- Trained staff
- Time

Life cycle factors that affect the system and design

The development life cycle used on a project can drastically affect how a system’s architecture is created as well as influence its design. In the case of a waterfall-style software development life cycle (SDLC), each phase must be completed before the next can begin. This waterfall approach does not allow for changes in a system’s architecture after that phase is completed, which can lead to major system issues when the architecture is suboptimal because of missed quality attributes. This can occur when, for example, a project has poor requirements and makes misguided architectural decisions. Once the architectural phase is complete, the concepts established in that phase move on to the design phase, which is bound to use the concepts and guidelines defined previously regardless of any quality attributes the project is missing. If any issues with the selected architectural concepts arise during the design phase, they cannot be corrected during the current project. This directly affects the design of the system, because the proper qualities required for the project were not applied when the architectural concepts were approved. When this is identified, nothing can be done to fix the architectural issues, and the system design must use the existing architectural concepts, regardless of their missing quality properties, because the architectural concepts for the project cannot be altered. The decisions made in the design phase then flow down to the implementation phase, where the actual system is coded based on the approved architectural concepts, regardless of their architectural quality.

Conversely, projects using a more iterative or agile methodology to implement a system have more flexibility to correct architectural decisions based on missing quality attributes. This is because each phase of the SDLC is executed more than once, so any issues identified in the architecture of a system can be corrected in the next architectural pass. Subsequently, the corresponding changes will be adjusted in the following design phase so that, when the project is completed, the optimal architectural and design decisions are applied to the solution.

Architecture factors that indicate functional suitability

Systems that have functional shortcomings do not have the proper functionality based on the project’s driving quality attributes. What this means in English is that the system does not live up to what is required of it by the stakeholders, as identified by the missing quality attributes and requirements. One way to prevent functional shortcomings is to test the project’s architecture, design, and implementation against the project’s driving quality attributes to ensure that none of the attributes were missed in any of the phases. Another way to ensure a system has functional suitability is to certify that all its requirements are fully articulated so that there is no chance of misconceptions or misinterpretations by any stakeholder. This helps prevent issues in interpreting the system requirements during the initial architectural concept phase, the design phase, and the implementation phase.

Consider the applicability of other architectural models

When considering an architectural model for a project, it is also important to consider alternative architectural models to ensure that the model that is selected will meet the system’s required functionality and quality attributes. Recently I can remember talking about a project that I was working on, and a coworker suggested a different architectural approach that I had never considered. This new model allows for the same functionality offered by the existing model but allows for a higher quality project because it fulfills more quality attributes. It is always important to seek alternatives prior to committing to an architectural model.

Factors used to identify high-risk components

A high-risk component can be defined as a component that fulfills two or more quality attributes for a system. An example of this can be seen in a web application that utilizes a remote database. One high-risk component in this system is the TCP/IP component, because it allows HTTP connections to be handled by the web server and also allows the server to connect to a remote database server so that it can import data into the system. This component supports both the data assurance quality attribute and the accessibility quality attribute, because the system is available on the network. If for some reason the TCP/IP component were to fail, the web application would fail on two quality attributes, accessibility and data assurance, in that the web site would not be accessible and data could not be updated as needed.

Summary

As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely its implementation matches its desired quality attributes. One way to ensure a higher quality system is to enforce that all project requirements are fully articulated so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high quality system based on its quality attributes.

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn’t contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder:

As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to “Content” and the Copy to Output Directory to “Copy if newer.” This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you’ll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder.

In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio’s Build menu will cause errors.
Once you create the class library project there are a few easy steps to change it into a static file project.

The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file, because the project doesn’t contain any C# code:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files:

<Target Name="Build">
    <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
</Target>
<Target Name="Clean">
    <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
</Target>
<Target Name="Rebuild" DependsOnTargets="Clean;Build">
</Target>

The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you’re working on a project that contains many static files I hope this solution helps you out!

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
    You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere, and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox.

Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I’m not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I’m going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control.

Defining assemblies as fully-trusted

In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain:

// get the Assembly object for the assembly
Assembly assemblyWithApi = ...

// get the StrongName from the assembly's collection of evidence
StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

// create the sandbox
AppDomain sandbox = AppDomain.CreateDomain(
    "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName);

Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox.

So now you have a class that you want the sandboxed code to call:

// within assemblyWithApi
public class MyApi
{
    public static void MethodToDoThings() { ... }
}

// within the sandboxed dll
public class UntrustedSandboxedClass
{
    public void DodgyMethod()
    {
        ...
        MyApi.MethodToDoThings();
        ...
    }
}

However, if you try to do this, you get quite an ugly exception:

MethodAccessException: Attempt by security transparent method ‘UntrustedSandboxedClass.DodgyMethod()’ to access security critical method ‘MyApi.MethodToDoThings()’ failed.

Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code.

Security transparency and AllowPartiallyTrustedCallersAttribute

So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api:

[SecuritySafeCritical]
public static void MethodToDoThings() { ... }

However, this doesn’t solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What’s going on?

By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods. This is because it may not have been designed in a secure way when called from transparent code – as we’ll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and to close any potential security holes.

This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit and is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails.

So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly:

[assembly: AllowPartiallyTrustedCallers]

and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application.

Conclusion

That’s the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, applying APTCA to an assembly asserts that you have run a full security audit of every type and member in the assembly. If you haven’t, then you could inadvertently open a security hole. I’ll be looking at ways this can happen in my next post.
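
To tie the pieces together, here is a minimal sketch of how the controlling application might create the addin instance inside the sandbox and call into it. This is an illustration under assumptions rather than code from the post: it assumes the addin class derives from MarshalByRefObject (the .NET Remoting approach mentioned above), that a shared IAddin interface lives in the full-trust API assembly, and that the assembly and type names used here are hypothetical.

// In the full-trust API assembly (shared by the host and the addin):
public interface IAddin
{
    void DodgyMethod();
}

// In the addin dll, loaded partially-trusted inside the sandbox:
public class UntrustedSandboxedClass : MarshalByRefObject, IAddin
{
    public void DodgyMethod()
    {
        // Transparent code; it can call the SafeCritical API in the APTCA assembly
        MyApi.MethodToDoThings();
    }
}

// In the controlling application, after creating 'sandbox' as shown earlier:
IAddin addin = (IAddin)sandbox.CreateInstanceAndUnwrap(
    "UntrustedAddin",                           // addin assembly display name (hypothetical)
    "UntrustedAddin.UntrustedSandboxedClass");  // namespace-qualified type name (hypothetical)
addin.DodgyMethod(); // the call is remoted into the sandboxed appdomain

Because the addin class derives from MarshalByRefObject, the object itself stays inside the sandbox and only a proxy crosses the boundary, so DodgyMethod executes with the sandbox's restricted permissions.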

    Read the article

  • A Gentle Introduction to NuGet

    - by Joe Mayo
    Not too long ago, Microsoft released NuGet, an automated package manager for Visual Studio. NuGet makes it easy to download and install assemblies, and their references, into a Visual Studio project. These assemblies, which I loosely refer to as packages, are often open source, and include projects such as LINQ to Twitter. In this post, I'll explain how to get started using NuGet with your projects, including: installing NuGet, installing/uninstalling LINQ to Twitter via console command, and installing/uninstalling LINQ to Twitter via the graphical reference menu.

Installing NuGet

The first step you'll need to take is to install NuGet. Visit the NuGet site, at http://nuget.org/, click on the Install NuGet button, and download the NuGet.Tools.vsix installation file, shown below. Each browser is different (e.g. Firefox, Chrome, IE), so you might see options to run right away, save to a location, or access the file through the browser's download manager. Regardless of how you receive the NuGet installer, execute the downloaded NuGet.Tools.vsix to install NuGet into Visual Studio.

The NuGet Footprint

When you open Visual Studio, observe that there is a new menu option on the Tools menu, titled Library Package Manager; this is where you use NuGet. There are two menu options, from the Library Package Manager menu, that you can use: Package Manager Console and Package Manager Settings. I won't discuss Package Manager Settings in this post, except to give you a general idea that, as one of a set of capabilities, it manages the path to the NuGet server, which is already set for you.

Another menu, added by the NuGet installer, is Add Library Package Reference, found by opening the context menu for either a Solution Explorer project or a project's References folder, or via the Project menu. I'll discuss how to use this later in the post. The following discussion is concerned with the other menu option, Package Manager Console, which allows you to manage NuGet packages.

Getting a NuGet Package

Selecting Tools -> Library Package Manager -> Package Manager Console opens the Package Manager Console. As you can see below, the Package Manager Console is text-based and you'll need to type in commands to work with packages. In this post, I'll explain how to use the Package Manager Console to install LINQ to Twitter, but there are many more commands, explained in the NuGet Package Manager Console Commands documentation. To install LINQ to Twitter, open the project where you want LINQ to Twitter installed, and type the following at the PM> prompt:

Install-Package linqtotwitter

If all works well, you'll receive a confirmation message, similar to the following, after a brief pause:

Successfully installed 'linqtotwitter 2.0.20'. Successfully added 'linqtotwitter 2.0.20' to NuGetInstall.

Also, observe that a reference to the LinqToTwitter.dll assembly was added to your current project.

Uninstalling a NuGet Package

I won't be so bold as to assume that you would only want to use LINQ to Twitter, because there are other Twitter libraries available; I recommend Twitterizer if you don't care for LINQ to Twitter. So, you might want to use the following command at the PM> prompt to remove LINQ to Twitter from your project:

Uninstall-Package linqtotwitter

After a brief pause, you'll see a confirmation message similar to the following:

Successfully removed 'linqtotwitter 2.0.20' from NuGetInstall.

Also, observe that the LinqToTwitter.dll assembly no longer appears in your project references list.
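
As a quick sanity check that the package works (before you uninstall it as shown above), you can query Twitter as soon as the reference has been added. The following is only a hedged sketch: the type and member names (TwitterContext, Status, StatusType.Public, Text) are recalled from the LINQ to Twitter 2.x API and may differ in other versions, so treat them as assumptions rather than the definitive API.

using System;
using System.Linq;
using LinqToTwitter; // reference added by the NuGet install

class TwitterDemo
{
    static void Main()
    {
        // Assumption: the 2.x API allows an unauthenticated context for public queries
        var twitterCtx = new TwitterContext();

        // Assumption: Status is the queryable entity set and StatusType selects the timeline
        var publicTweets =
            from tweet in twitterCtx.Status
            where tweet.Type == StatusType.Public
            select tweet;

        foreach (var tweet in publicTweets)
        {
            Console.WriteLine(tweet.Text); // Text is assumed to hold the tweet body
        }
    }
}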
    Sometimes using the Package Manager Console is required for more sophisticated scenarios.  However, LINQ to Twitter doesn't have any dependencies and is a very simple install, so you can use another, graphical, method of installing, which I'll show you next.

    Graphical Installations

    As explained earlier, clicking Add Library Package Reference from the context menu for either a Solution Explorer project or a project's References folder, or via the Project menu, opens the Add Library Package Reference window. This window allows you to add a reference to a NuGet package in your project. To the left of the window are a few accordion folders to help you find packages that are either online or already installed.  Just like the previous section, I'll assume you are installing LINQ to Twitter for the first time, so you would select the Online folder and click All.  After waiting for package descriptions to download, you'll notice that there are too many to scroll through in a short period of time, over 900 as I write this.  Therefore, use the search box located at the top right corner of the window and type LINQ to Twitter, as I've done in the previous figure. You'll see LINQ to Twitter appear in the list. Click the Install button on the LINQ to Twitter entry. If the installation was successful, you'll see a message box display and disappear quickly (or maybe not, if your machine is very fast or you blink at that moment). Then you'll see a reference to the LinqToTwitter.dll assembly in your project's references list. Note: While running this demo, I ran into an issue where VS had created a file lock on an installation folder without releasing it, causing a "packagename already exists. Skipping..." error followed by an error describing that it couldn't write to a destination folder.  I resolved the problem by closing and reopening VS. If you open the Add Library Package Reference window again, you'll see LINQ to Twitter listed in the Recent packages folder.

    Summary

    You can install NuGet from the online home page with the click of a button.  NuGet provides two ways to work with packages: the console and the graphical window.  While the graphical window is easiest, the console window is more powerful. You can now quickly add project references to many available packages via the NuGet service. Joe

    Read the article

  • PHP Pear Installation on CentOS

    - by Prabhakar
    [root@ip ~]# yum install php-pear*
    Reducing CentOS-5 Testing to included packages only
    Finished
    Setting up Install Process
    Package 1:php-pear-1.8.1-2.el5.centos.noarch already installed and latest version
    Package php-pear-XML-Util is obsoleted by php-pear, trying to install 1:php-pear-1.8.1-2.el5.centos.noarch instead
    Package 1:php-pear-1.8.1-2.el5.centos.noarch already installed and latest version
    Package php-pear-DB is obsoleted by php-pear-db, trying to install php-pear-db-1.7.13-2.el5.rf.noarch instead
    Resolving Dependencies
    --> Running transaction check
    ---> Package php-pear-Auth-RADIUS.noarch 0:1.0.6-1.el5 set to be updated
    --> Processing Dependency: php-pecl(radius) >= 1.2.5 for package: php-pear-Auth-RADIUS
    --> Processing Dependency: php-mcrypt for package: php-pear-Auth-RADIUS
    ---> Package php-pear-Auth-SASL.noarch 0:1.0.4-1.el5 set to be updated
    ---> Package php-pear-Benchmark.noarch 0:1.2.7-1.el5 set to be updated
    ---> Package php-pear-CAS.noarch 0:1.1.3-1.el5 set to be updated
    --> Processing Dependency: php-domxml-php4-php5 for package: php-pear-CAS
    ---> Package php-pear-Cache-Lite.noarch 0:1.7.5-1.el5 set to be updated
    ---> Package php-pear-CodeGen.noarch 0:1.0.7-3.el5 set to be updated
    ---> Package php-pear-CodeGen-PECL.noarch 0:1.1.3-3.el5 set to be updated
    ---> Package php-pear-Console-CommandLine.noarch 0:1.1.3-3.el5 set to be updated
    ---> Package php-pear-Console-Getargs.noarch 0:1.3.5-1.el5 set to be updated
    ---> Package php-pear-Console-ProgressBar.noarch 0:0.5.2-0.2.beta.el5 set to be updated
    ---> Package php-pear-Console-Table.noarch 0:1.1.1-1.el5 set to be updated
    ---> Package php-pear-Crypt-Blowfish.noarch 0:1.0.1-1.el5 set to be updated
    ---> Package php-pear-Crypt-CHAP.noarch 0:1.0.2-1.el5 set to be updated
    ---> Package php-pear-DB-DataObject.noarch 0:1.8.12-1.el5 set to be updated
    ---> Package php-pear-DB-DataObject-FormBuilder.noarch 0:1.0.0-1.el5 set to be updated
    ---> Package php-pear-DB-QueryTool.noarch 0:1.1.2-1.el5 set to be updated
    ---> Package php-pear-Date.noarch 0:1.4.7-2.el5.centos set to be updated
    ---> Package php-pear-Date-Holidays.noarch 0:0.21.4-1.el5 set to be updated
    ---> Package php-pear-Date-Holidays-USA.noarch 0:0.1.1-1.el5 set to be updated
    ---> Package php-pear-Event-Dispatcher.noarch 0:1.1.0-1.el5 set to be updated
    ---> Package php-pear-File.noarch 0:1.2.2-1.el5.centos set to be updated
    ---> Package php-pear-File-Find.noarch 0:1.3.0-1.el5 set to be updated
    ---> Package php-pear-File-Passwd.noarch 0:1.1.7-1.el5 set to be updated
    ---> Package php-pear-File-SMBPasswd.noarch 0:1.0.3-1.el5 set to be updated
    ---> Package php-pear-HTML-Common.noarch 0:1.2.5-1.el5 set to be updated
    ---> Package php-pear-HTML-QuickForm.noarch 0:3.2.12-1.el5 set to be updated
    ---> Package php-pear-HTML-QuickForm-ElementGrid.noarch 0:0.1.1-1.el5 set to be updated
    ---> Package php-pear-HTML-QuickForm-advmultiselect.noarch 0:1.4.1-1.el5 set to be updated
    ---> Package php-pear-HTML-Table.noarch 0:1.7.5-1.el5 set to be updated
    ---> Package php-pear-HTML-Template-IT.noarch 0:1.3.0-2.el5 set to be updated
    ---> Package php-pear-HTML_Template_PHPLIB.noarch 0:1.4.0-2.el5 set to be updated
    ---> Package php-pear-HTTP.noarch 0:1.4.0-7.el5 set to be updated
    ---> Package php-pear-HTTP-Client.noarch 0:1.1.1-1.el5 set to be updated
    ---> Package php-pear-HTTP-Request.noarch 0:1.4.4-1.el5 set to be updated
    ---> Package php-pear-HTTP-Upload.noarch 0:0.9.1-2.el5 set to be updated
    ---> Package php-pear-Image-Canvas.noarch 0:0.3.1-1.el5 set to be updated
    ---> Package php-pear-Image-Color.noarch 0:1.0.3-1.el5 set to be updated
    ---> Package php-pear-Image-Graph.noarch 0:0.8.0-1.el5 set to be updated
    ---> Package php-pear-Image-GraphViz.noarch 0:1.2.1-4.el5 set to be updated
    --> Processing Dependency: graphviz for package: php-pear-Image-GraphViz
    ---> Package php-pear-Log.noarch 0:1.12.7-1.el5 set to be updated
    ---> Package php-pear-MDB2.noarch 0:2.4.1-2.el5.centos set to be updated
    ---> Package php-pear-MDB2-Driver-mysql.noarch 0:1.4.1-3.el5.centos set to be updated
    ---> Package php-pear-MDB2-Driver-pgsql.noarch 0:1.4.1-1.el5 set to be updated
    ---> Package php-pear-MDB2-Schema.noarch 0:0.8.0-2.el5 set to be updated
    ---> Package php-pear-Mail.noarch 0:1.1.14-5.el5.1 set to be updated
    ---> Package php-pear-Mail-Mime.noarch 0:1.4.0-1.el5.centos set to be updated
    ---> Package php-pear-Math-Stats.noarch 0:0.9.0-0.1.beta3.el5 set to be updated
    ---> Package php-pear-Net-Curl.noarch 0:1.2.5-1.el5 set to be updated
    ---> Package php-pear-Net-DIME.noarch 0:1.0.1-1.el5 set to be updated
    ---> Package php-pear-Net-FTP.noarch 0:1.3.4-1.el5 set to be updated
    ---> Package php-pear-Net-POP3.noarch 0:1.3.7-1.el5 set to be updated
    ---> Package php-pear-Net-Ping.noarch 0:2.4.5-1.el5 set to be updated
    ---> Package php-pear-Net-SMTP.noarch 0:1.4.4-1.el5 set to be updated
    ---> Package php-pear-Net-Sieve.noarch 0:1.3.2-1.el5 set to be updated
    ---> Package php-pear-Net-Socket.noarch 0:1.0.10-1.el5 set to be updated
    ---> Package php-pear-Net-Traceroute.noarch 0:0.21.3-1.el5 set to be updated
    ---> Package php-pear-Net-URL.noarch 0:1.0.15-1.el5.centos set to be updated
    ---> Package php-pear-Net-URL-Mapper.noarch 0:0.9.0-2.el5.1 set to be updated
    ---> Package php-pear-Net-URL2.noarch 0:0.3.0-1.el5 set to be updated
    ---> Package php-pear-Net-UserAgent-Detect.noarch 0:2.5.2-1.el5 set to be updated
    ---> Package php-pear-Numbers-Roman.noarch 0:1.0.2-2.el5 set to be updated
    ---> Package php-pear-Numbers-Words.noarch 0:0.16.1-1.el5 set to be updated
    ---> Package php-pear-OLE.noarch 0:1.0.0-0.4.rc1.el5 set to be updated
    ---> Package php-pear-PHP-CodeSniffer.noarch 0:1.2.2-1.el5 set to be updated
    ---> Package php-pear-PHP-Compat.noarch 0:1.5.0-1.el5 set to be updated
    ---> Package php-pear-PHP-CompatInfo.noarch 0:1.4.3-1.el5 set to be updated
    ---> Package php-pear-PHPUnit.noarch 0:3.3.5-2.el5 set to be updated
    --> Processing Dependency: php-pecl(Xdebug) >= 2.0.0 for package: php-pear-PHPUnit
    --> Processing Dependency: php-channel(pear.phpunit.de) for package: php-pear-PHPUnit
    ---> Package php-pear-Pager.noarch 0:2.4.8-1.el5 set to be updated
    ---> Package php-pear-Payment-Process.noarch 0:0.6.6-1.el5 set to be updated
    ---> Package php-pear-Phlickr.noarch 0:0.2.7-2.el5 set to be updated
    ---> Package php-pear-PhpDocumentor.noarch 0:1.4.3-1.el5 set to be updated
    --> Processing Dependency: php-Smarty >= 2.6.0 for package: php-pear-PhpDocumentor
    ---> Package php-pear-PhpDocumentor-docs.noarch 0:1.4.3-1.el5 set to be updated
    ---> Package php-pear-SOAP.noarch 0:0.11.0-2.el5 set to be updated
    ---> Package php-pear-Spreadsheet-Excel-Writer.noarch 0:0.9.2-2.el5 set to be updated
    ---> Package php-pear-Structures-DataGrid.noarch 0:0.8.3-1.el5 set to be updated
    ---> Package php-pear-Structures-DataGrid-DataSource-Array.noarch 0:0.1.3-1.el5 set to be updated
    ---> Package php-pear-Structures-DataGrid-DataSource-DataObject.noarch 0:0.1.2-1.el5 set to be updated
    ---> Package php-pear-Structures-DataGrid-DataSource-MDB2.noarch 0:0.1.10-1.el5 set to be updated
    ---> Package php-pear-Structures-DataGrid-DataSource-RSS.noarch 0:0.1.1-1.el5 set to be updated
    ---> Package php-pear-Structures-DataGrid-Renderer-Pager.noarch 0:0.1.2-1.el5 set to be updated
    ---> Package php-pear-Text-Diff.noarch 0:1.1.0-1.el5 set to be updated
    ---> Package php-pear-Validate.noarch 0:0.8.3-1.el5 set to be updated
    ---> Package php-pear-Validate-Finance-CreditCard.noarch 0:0.5.2-1.el5 set to be updated
    ---> Package php-pear-Var-Dump.noarch 0:1.0.3-2.el5 set to be updated
    ---> Package php-pear-XML-Beautifier.noarch 0:1.1-3.el5 set to be updated
    ---> Package php-pear-XML-Parser.noarch 0:1.2.8-1.el5 set to be updated
    ---> Package php-pear-XML-RSS.noarch 0:1.0.0-1.el5 set to be updated
    ---> Package php-pear-XML-Serializer.noarch 0:0.20.0-1.el5 set to be updated
    ---> Package php-pear-date.noarch 0:1.4.6-1.el5.rf set to be updated
    ---> Package php-pear-db.noarch 0:1.7.13-2.el5.rf set to be updated
    ---> Package php-pear-excel.noarch 0:0.9.0-1.el5.rf set to be updated
    ---> Package php-pear-file.noarch 0:1.2.2-1.el5.rf set to be updated
    ---> Package php-pear-log.noarch 0:1.9.3-1.el5.rf set to be updated
    ---> Package php-pear-mail_mime.noarch 0:1.3.1-1.el5.rf set to be updated
    ---> Package php-pear-ole.noarch 0:0.5-2.el5.rf set to be updated
    --> Running transaction check
    ---> Package graphviz.i386 0:2.22.0-4.el5.rf set to be updated
    ---> Package php-Smarty.noarch 0:2.6.26-1.el5 set to be updated
    ---> Package php-channel-phpunit.noarch 0:1.0-2.el5 set to be updated
    ---> Package php-domxml-php4-php5.noarch 0:1.21.2-1.el5 set to be updated
    ---> Package php-mcrypt.i386 0:5.2.9-2.el5.centos.3 set to be updated
    --> Processing Dependency: php-api = 20041225 for package: php-mcrypt
    ---> Package php-pecl-radius.i386 0:1.2.5-4.el5 set to be updated
    --> Processing Dependency: php-api = 20041225 for package: php-pecl-radius
    ---> Package php-pecl-xdebug.i386 0:2.0.5-1.el5.1 set to be updated
    --> Processing Dependency: php-api = 20041225 for package: php-pecl-xdebug
    --> Finished Dependency Resolution
    php-pecl-xdebug-2.0.5-1.el5.1.i386 from epel has depsolving problems
    --> Missing Dependency: php-api = 20041225 is needed by package php-pecl-xdebug-2.0.5-1.el5.1.i386 (epel)
    php-pecl-radius-1.2.5-4.el5.i386 from epel has depsolving problems
    --> Missing Dependency: php-api = 20041225 is needed by package php-pecl-radius-1.2.5-4.el5.i386 (epel)
    php-mcrypt-5.2.9-2.el5.centos.3.i386 from c5-testing has depsolving problems
    --> Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.2.9-2.el5.centos.3.i386 (c5-testing)
    Error: Missing Dependency: php-api = 20041225 is needed by package php-pecl-radius-1.2.5-4.el5.i386 (epel)
    Error: Missing Dependency: php-api = 20041225 is needed by package php-mcrypt-5.2.9-2.el5.centos.3.i386 (c5-testing)
    Error: Missing Dependency: php-api = 20041225 is needed by package php-pecl-xdebug-2.0.5-1.el5.1.i386 (epel)
    You could try using --skip-broken to work around the problem
    You could try running: package-cleanup --problems
                           package-cleanup --dupes
                           rpm -Va --nofiles --nodigest
    The program package-cleanup is found in the yum-utils package.
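    The root cause is visible in the last few lines: php-mcrypt, php-pecl-radius, and php-pecl-xdebug (pulled from the c5-testing and epel repositories) require php-api = 20041225, which the installed PHP apparently does not provide. A minimal sketch of the workarounds yum itself suggests, assuming you only need the PEAR packages that do resolve (package-cleanup ships with yum-utils):

    # install everything that resolves and skip the three packages that don't
    yum install --skip-broken php-pear*

    # or inspect the dependency problems first
    yum install yum-utils
    package-cleanup --problems
    package-cleanup --dupes
    rpm -Va --nofiles --nodigest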

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:

    sda1 ext3 /boot
    sda2 ext4 / (current version)
    sda3 ext4 / (old version)
    sda4 swap /swap
    sda5 ntfs (contains folders symbolically linked to /home on /)

    So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again). Here's the issue. I'd like to be able to drop the install ISO into /distro on the current version and install a new version on / of the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have: downloaded the install.iso, created a folder at /distro, copied the install.iso into /distro, and extracted vmlinuz and initrd.lz into /distro. Then I modified /boot/grub/menu.lst with the following entry:

    title Install Linux
    root (hd0,1)
    kernel /distro/vmlinuz
    initrd /distro/initrd.lz

    vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with:

    unlzma < initrd.lz > initrd.img

    and updated the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the ISO, and launches the installer. Am I missing something here?

    Update: First, I wanted to say that the accepted answer would have been the best option if I were doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). So the problem with the above approach was that I was missing the boot options that vmlinuz (the Linux kernel) needed to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used UNetbootin to drop the LiveCD onto a USB drive and booted from that. Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would have taken:

    Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used GParted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.

    Step Two: Mount the newly formatted partition (we'll call the mount point ubuntu for simplicity):
    sudo mkdir /mnt/ubuntu
    sudo mount /dev/sda3 /mnt/ubuntu

    Step Three: Get debootstrap:
    sudo apt-get install debootstrap

    Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk):
    sudo mkdir /media/cdrom
    sudo mount -o loop ~/ubuntu.iso /media/cdrom

    Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture):
    sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom

    The settings here vary.
    While I loaded debootstrap from an install ISO, you can also have debootstrap automatically download and install from a repository URL (while most of these repositories contain Debian versions, I'm still not clear as to whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd run debootstrap if you were doing it directly from a repository:

    sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

    Here's the link that I primarily used to figure this out.
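    For anyone who still wants the original menu.lst route: on Ubuntu/Mint LiveCDs the kernel line in the CD's own grub.cfg normally carries the casper boot options, so a working entry would plausibly have looked like the sketch below. The boot=casper and iso-scan/filename options are my assumption of what was missing; /distro/install.iso and (hd0,1) are taken from the setup described above:

    title Install Linux (boot the LiveCD ISO)
    root (hd0,1)
    kernel /distro/vmlinuz boot=casper iso-scan/filename=/distro/install.iso quiet splash --
    initrd /distro/initrd.lz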

    Read the article

  • How to convert from amateur web app developer to professional web apper?

    - by Nilesh
    This is more of a practical question on the web app development and deployment process. Here is some background information. I use PHP for server-side scripting and JavaScript for the client side. I use NetBeans and Notepad++. I use Firefox and Firebug for debugging and testing. The process I use is very amateurish: I code something in NetBeans, something in Notepad++, and since there is nothing to compile, I just refresh the Firefox browser and test it. This is convenient and faster compared to the Java development environment, where you would have to at least compile and deploy the jar files before you could run them. I have been thinking of putting a formal process into my development and find it hard to put together. There are so many things to do before you can deploy your final web app. I keep hearing about JSLint, compression, unit testing (Selenium), Ant, YUI Compressor, etc., but I am now looking for some steps that I can take to make me more organized. For example, I use NetBeans but don't use any projects within it; I directly update the files. I don't use any source control, but my Iomega backup saves each save as a different version, and at the end of the day I back up the dev directory to my Amazon S3 account. For me, the development environment is just a DEV directory, TEST is my intermediate stage, and PROD is the final directory that gets pushed out to the server. But all these directories are in the same Apache home. I have a few PHP scripts that just copy the needed files into the production directory. That's about it for my development approach. I know I am missing the following:

    - Regression testing (manual or automated?)
    - Automated testing (Selenium?)
    - Automated deployment (Ant?)
    - Source control (SVN?)
    - Quality control (JSLint?)

    Can someone explain what the missing steps are and how to go about filling them in, in order to have a more professional approach? I am looking for tools, with example tutorials, for streamlining the whole development-to-deployment stage. For me, just getting a hang of database, server-side, and client-side development, all in synchronization, was itself a huge accomplishment. And now I feel there is a lot missing before you can produce a quality web application. For example, I see a lot of mention of automated testing, but how do I put it to use with respect to JavaScript and PHP? How do I use Ant for the deployment, etc.? Is this all too much for a one- or two-person development team? Is there a way to automate all of the above so that I just keep coding in NetBeans and then run a batch file, configured once, that I run every time to produce the code in the production directory? A lot of this information is scattered on the web and here; if someone can guide me, I would be happy to consolidate it here. Thank you for your patience :)
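    As a hedged sketch of what a minimal "run one script" pipeline could look like for this DEV/TEST/PROD layout, the commands below assume SVN for source control and rsync for promotion between stages; the DEV, TEST, and PROD directory names come from the question, while the repository URL and file names are placeholders:

    # one-time: put DEV under version control (repository URL is a placeholder)
    svn import DEV http://svn.example.com/myapp/trunk -m "initial import"
    svn checkout http://svn.example.com/myapp/trunk DEV

    # every release: commit, lint, then promote DEV -> TEST -> PROD
    svn commit DEV -m "today's changes"
    php -l DEV/index.php              # PHP syntax check; repeat or loop for each script
    rsync -av --delete DEV/ TEST/     # refresh the intermediate stage
    rsync -av --delete TEST/ PROD/    # push the final directory the server serves

    The same sequence could later be wrapped in an Ant build file or a shell/batch script so it runs as a single command.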

    Read the article

< Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >