Search Results

Search found 6007 results on 241 pages for 'sub jp'.


  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I’ve been doing a lot with it is to remove explicit Reflection code that’s often necessary when you ‘dynamically’ need to walk an object hierarchy. In the past I’ve had a number of ReflectionUtils that used string based expressions to walk an object hierarchy. With the introduction of dynamic much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot. The Old Way - Reflection Here’s a really contrived example, but assume for a second that you’d want to dynamically retrieve a Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this: string path = Page.Request.Url.AbsolutePath; Now assume for a second that Page wasn’t available as a strongly typed instance and all you had was an object reference to start with and you couldn’t cast it (right, I said this was contrived :-)) If you’re using raw Reflection code to retrieve this you’d end up writing 3 sets of Reflection calls using GetValue(). Here’s some internal code I use to retrieve Property values as part of ReflectionUtils: /// <summary> /// Retrieve a property value from an object dynamically. This is a simple version /// that uses Reflection calls directly. It doesn't support indexers. /// </summary> /// <param name="instance">Object to make the call on</param> /// <param name="property">Property to retrieve</param> /// <returns>Object - cast to proper type</returns> public static object GetProperty(object instance, string property) { return instance.GetType().GetProperty(property, ReflectionUtils.MemberAccess).GetValue(instance, null); } If you want more control over properties and support both fields and properties as well as array indexers a little more work is required: /// <summary> /// Parses Properties and Fields including Array and Collection references. /// Used internally for the 'Ex' Reflection methods. /// </summary> /// <param name="Parent"></param> /// <param name="Property"></param> /// <returns></returns> private static object GetPropertyInternal(object Parent, string Property) { if (Property == "this" || Property == "me") return Parent; object result = null; string pureProperty = Property; string indexes = null; bool isArrayOrCollection = false; // Deal with Array Property if (Property.IndexOf("[") > -1) { pureProperty = Property.Substring(0, Property.IndexOf("[")); indexes = Property.Substring(Property.IndexOf("[")); isArrayOrCollection = true; } // Get the member MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0]; if (member.MemberType == MemberTypes.Property) result = ((PropertyInfo)member).GetValue(Parent, null); else result = ((FieldInfo)member).GetValue(Parent); if (isArrayOrCollection) { indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty); if (result is Array) { int Index = -1; int.TryParse(indexes, out Index); result = CallMethod(result, "GetValue", Index); } else if (result is ICollection) { if (indexes.StartsWith("\"")) { // String Index indexes = indexes.Trim('\"'); result = CallMethod(result, "get_Item", indexes); } else { // assume numeric index int index = -1; int.TryParse(indexes, out index); result = CallMethod(result, "get_Item", index); } } } return result; } /// <summary> /// Returns a property or field value using a base object and sub members including . syntax. 
/// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company") /// This method also supports indexers in the Property value such as: /// Customer.DataSet.Tables["Customers"].Rows[0] /// </summary> /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param> /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param> /// <returns></returns> public static object GetPropertyEx(object Parent, string Property) { Type type = Parent.GetType(); int at = Property.IndexOf("."); if (at < 0) { // Complex parse of the property return GetPropertyInternal(Parent, Property); } // Walk the . syntax - split into current object (Main) and further parsed objects (Subs) string main = Property.Substring(0, at); string subs = Property.Substring(at + 1); // Retrieve the next . section of the property object sub = GetPropertyInternal(Parent, main); // Now go parse the left over sections return GetPropertyEx(sub, subs); } As you can see there’s a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway, with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this: string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string; This works fine, but is bulky to write and of course requires that I use my custom routines. It’s also quite slow as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy. Enter dynamic – way easier! .NET 4.0’s dynamic type makes the above really easy. The following code is all that it takes: object objPage = Page; // force to object for contrivance :) dynamic page = objPage; // convert to dynamic from untyped object string scriptUrl = page.Request.Url.AbsolutePath; The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types so when you look at Intellisense you’ll see something like this when typing Request.: In other words any dynamic value access on an object returns another dynamic object which is what allows the walking of the hierarchy chain. Note also that the result value doesn’t have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which is nice, making for natural syntax that looks *exactly* like the fully typed syntax, but is completely dynamic. 
Note that you can also use indexers in the same natural syntax so the following also works on the dynamic page instance: string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"]; The dynamic type is going to make a lot of Reflection code go away as it’s simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance: void Reflection() { Stopwatch stop = new Stopwatch(); stop.Start(); for (int i = 0; i < reps; i++) { // string url = ReflectionUtils.GetProperty(Page,"Title") as string;// "Request.Url.AbsolutePath") as string; string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess).GetValue(Page, null) as string; } stop.Stop(); Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString()); } void Dynamic() { Stopwatch stop = new Stopwatch(); stop.Start(); dynamic page = Page; for (int i = 0; i < reps; i++) { string url = page.Title; //Request.Url.AbsolutePath; } stop.Stop(); Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString()); } The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There’s a bit of overhead in the first dynamic object call but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
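For example, here is a minimal sketch of a session that dispatches its buffers every 5 seconds instead of the default 30. The session name and event are placeholder choices for illustration, not something from the original post:

    -- Flush event buffers to the target at most 5 seconds after an event fires
    CREATE EVENT SESSION [impatient_dba_session] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);   -- default is 30 seconds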
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over-collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time check the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
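To make the last two sections concrete, here is a hedged sketch: the first statement checks a running session for dropped events using the DMV mentioned above, and the second declares a session with more event buffer memory plus a large event buffer. The session names and sizes are invented for illustration, not recommendations:

    -- How many events has my session dropped, and how big was the largest one?
    SELECT name,
           dropped_event_count,
           largest_event_dropped_size
    FROM sys.dm_xe_sessions
    WHERE name = N'impatient_dba_session';

    -- A session with a bigger event buffer and room for occasional large events
    -- (MAX_EVENT_SIZE is only useful when single events are larger than MAX_MEMORY)
    CREATE EVENT SESSION [chatty_app_session] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (MAX_MEMORY = 16 MB,                               -- default is 4 MB
          MAX_EVENT_SIZE = 32 MB,                           -- only if large events are being dropped
          EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);  -- the default, keep it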
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts:

Configuration      None        Node        CPU
2 CPUs, 1 Node     3 buffers   3 buffers   5 buffers
6 CPUs, 2 Nodes    3 buffers   6 buffers   15 buffers
40 CPUs, 5 Nodes   3 buffers   15 buffers  100 buffers

Aside: Buffer size on multi-processor computers As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over-collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. 
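To tie the partitioning options back to syntax, here is a hedged sketch of a partitioned session for a many-CPU server; again, the session name and sizes are illustrative assumptions rather than guidance:

    -- Give each CPU its own set of buffers and raise the total memory,
    -- since MAX_MEMORY is sub-divided across all of the per-CPU buffers
    CREATE EVENT SESSION [busy_box_session] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (MEMORY_PARTITION_MODE = PER_CPU,   -- NONE (default), PER_NODE, or PER_CPU
          MAX_MEMORY = 64 MB);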
You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. Then WAWS triggered a new deployment to host my Node.js application. Someone may notice that an NPM package may contain many files and could be fairly large. For example, the “azure” package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package “express”, which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of these files must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages for us when deploying.   Let’s Start with a Demo A demo is the most straightforward way to show this. Let’s create a new WAWS and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let’s say I want to install “express”. Then create a new Node.js file named “server.js” and paste in the code below. 1: var express = require("express"); 2: var app = express(); 3: 4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7: 8: console.log("Web application opened."); 9: app.listen(process.env.PORT); If we switch to Git for Windows right now we will find that it detected the changes we made, which include “server.js” and all files under the “node_modules” folder. What we need to upload should only be our source code, but the huge package files would have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud. First we need to add a special file named “.gitignore”. It seems this cannot be done directly from the file explorer since this file name consists only of an extension. So we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named “.gitignore”. If the command window asks for input just press Enter. 1: echo > .gitignore Now open this file, copy the content below into it and save. 1: node_modules Now if we switch to Git for Windows we will find that the packages under “node_modules” are not in the change list. So now if we commit and push, the “express” package will not be uploaded to Windows Azure. Second, let’s tell Windows Azure which packages it needs to install when deploying. Create another file named “package.json” and copy the content below into that file and save. 1: { 2: "name": "npmdemo", 3: "version": "1.0.0", 4: "dependencies": { 5: "express": "*" 6: } 7: } Now back in Git for Windows, commit our changes and push them to WAWS. Then let’s open the WAWS in the developer portal; we will see that a new deployment has finished. Click the arrow on the right side of this deployment and we can see how WAWS handled it. In particular we can see that WAWS executed NPM. And if we open the log we can review what command WAWS executed to install the packages, along with the installation output messages. As you can see WAWS installed “express” for me on the cloud side, so I don’t need to upload the whole package to Azure. 
Open the website and we can see the result, which proves that “express” has been installed successfully.   What Happened Under the Hood Now let’s explain a bit about what “.gitignore” and “package.json” mean. The “.gitignore” is an ignore configuration file for a git repository. All files and folders listed in the “.gitignore” will be excluded from the git push. In the example above I copied “node_modules” into this file in my local repository. This means: do not track and upload the files under the “node_modules” folder. So by using “.gitignore” I prevented all packages from being uploaded to Windows Azure. “.gitignore” can contain files and folders. It can also contain the files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to make sure the SQL package is included. The “package.json” file is the package definition file for a Node.js application. We can define the application name, version, description, author and other information in it in JSON format. We can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. And when a deployment happens, WAWS will look into this file, find the dependent packages and execute the NPM command to install them one by one. So in the demo above I copied “express” into this file so that WAWS will install it for me automatically. I updated the dependencies section of the “package.json” file manually. But this can be partially automated. If we have a valid “package.json” in our local repository, then when we are going to install some packages we can specify the “--save” parameter in the “npm install” command, so that NPM will update the dependencies section for us. For example, when I want to install the “azure” package I can execute the command below. Note that I added “--save” to the command. 1: npm install azure --save Once it finishes, my “package.json” will be updated automatically. Each dependent package will be listed there. The JSON key is the package name while the value is the version range. Below is a brief list of the version range formats. For more information about “package.json” please refer here.

version – Must match the version exactly. Example: "azure": "0.6.7"
>=version – Must be equal to or greater than the version. Example: "azure": ">=0.6.0"
1.2.x – The version number must start with the supplied digits, but any digit may be used in place of the x. Example: "azure": "0.6.x"
~version – The version must be at least as high as the range, and it must be less than the next major revision above the range. Example: "azure": "~0.6.7"
* – Matches any version. Example: "azure": "*"

And WAWS will install the proper version of the packages based on what you defined here. The process of WAWS git deployment and NPM installation would be like this.   But Some Packages… As we know, when we specify the dependencies in “package.json” WAWS will download and install them on the cloud. For most packages this works very well. But there are some special packages that may not work. That is, if the package installation has special environment requirements it might fail. For example, the SQL Server Driver for Node.js package needs “node-gyp”, Python and C++ 2010 installed on the target machine during the NPM installation. If we just put “msnodesql” in the “package.json” file and push it to WAWS, the deployment will fail since there’s no “node-gyp”, Python or C++ 2010 on the WAWS virtual machine. For example, here is the “server.js” file. 
1: var express = require("express"); 2: var app = express(); 3: 4: app.get("/", function(req, res) { 5: res.send("Hello Node.js and Express."); 6: }); 7:  8: var sql = require("msnodesql"); 9: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;"; 10: app.get("/sql", function (req, res) { 11: sql.open(connectionString, function (err, conn) { 12: if (err) { 13: console.log(err); 14: res.send(500, "Cannot open connection."); 15: } 16: else { 17: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 18: if (err) { 19: console.log(err); 20: res.send(500, "Cannot retrieve records."); 21: } 22: else { 23: res.json(results); 24: } 25: }); 26: } 27: }); 28: }); 29: 30: console.log("Web application opened."); 31: app.listen(process.env.PORT); The “package.json” file. 1: { 2: "name": "npmdemo", 3: "version": "1.0.0", 4: "dependencies": { 5: "express": "*", 6: "msnodesql": "*" 7: } 8: } And it failed to deploy to WAWS. From the NPM log we can see it’s because “msnodesql” cannot be installed on WAWS. The solution is, in the “.gitignore” file, to ignore all packages except “msnodesql” and upload that package ourselves. This can be done by using the content below. We first un-ignore the “node_modules” folder. Then we ignore all of its sub folders, but ask git to check each sub folder. Finally we un-ignore the sub folder named “msnodesql”, which is the SQL Server Node.js driver. 1: !node_modules/ 2:  3: node_modules/* 4: !node_modules/msnodesql For more information about the syntax of “.gitignore” please refer to this thread. Now if we go to Git for Windows we will find that “msnodesql” is included in the uncommitted set while “express” is not. I also need to remove the “msnodesql” dependency from “package.json”. Commit and push to WAWS. Now we can see the deployment completed successfully. And then we can use the Windows Azure SQL Database from our Node.js application through the “msnodesql” package we uploaded.   Summary In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the “.gitignore” and “package.json” files we can exclude the dependent packages from our Node.js repository and let Windows Azure Web Site download and install them during deployment. For some special packages that cannot be installed by Windows Azure Web Site, such as “msnodesql”, we can put them into the publish payload as well. The combination of Windows Azure Web Site, Node.js and NPM makes it even easier and quicker for us to develop and deploy our Node.js application to the cloud.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • CodePlex Daily Summary for Tuesday, June 26, 2012

    CodePlex Daily Summary for Tuesday, June 26, 2012Popular ReleasesSOLID by example: All examples: All solid examplesJSLint for Resharper: 0.1.4560 Beta: Deadlock removed. No locks before reading cscript output from jslint, not verified to be very safe, but at least VS doesn't crash. Happy linting. :)SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1726.406): Use of new version of connection controls for a full support of OSDP authentication mechanism for CRM Online.DotNetNuke® Form and List: 06.00.02: DotNetNuke Form and List 06.00.02 Changes in 06.00.02 The scripts are now finally compatible with SQL Azure, tested in a new instance on Azure. If you are not targetting Azure, there is no need to upgrade from 06.00.01 (it won't hurt though). Changes in 06.00.01 Icons are shown in module action buttons (workaraound to core issue with IconAPI) Fix to Token2XSL Editor, changing List type raised exception MakeTumbnail and ShowXml handlers had been missing in install package Updated ...StreamInsight Samples: StreamInsight Product Team Samples V2.1: These samples correspond to the new StreamInsight APIs introduced with V2.1.Umbraco CMS: Umbraco CMS 5.2: Development on Umbraco v5 discontinued After much discussion and consultation with leaders from the Umbraco community it was decided that work on the v5 branch would be discontinued with efforts being refocused on the stable and feature rich v4 branch. For full details as to why this decision was made please watch the CodeGarden 12 Keynote. What about all that hard work?!?? We are not binning everything and it does not mean that all work done on 5 is lost! we are taking all of the best and m...IIS Express Manager: IIS Express 0.31 B: V0.1B - 04 May, 2012 Initiated Project. V0.2B - 05May, 2012 1. Fixed small bug. Threw error when stop button was pressed in an already stopped application. 2. Removed start and stop button. Double clicking on list items will now stop / start the websites. 3. Improved code readability. 4. Changed Orientation of Buttons in UI. V0.3B - 06May, 2012 1. Complete modification of IISEM and process ID handling 2. IISEM is now capable of reflecting the existing IISExpress processes right from startup...SPMegaMenu 0.2.0.a: SPMegaMenu 0.2.0.a: SPMegaMenu 0.2.0.a - *Refined the menu to allow for sub category additions. *Release 0.1.0.a did not allow for sub categories. *Also added a Javascript Array Prototype to facilitate removal of duplicates in the sub category return. (the prototype is added as a workaround as I am not able to get the CAML GroupBy function to work correctly against a lookup column)CodeGenerate: CodeGenerate Alpha: The Project can auto generate C# code. Include BLL Layer、Domain Layer、IDAL Layer、DAL Layer. Support SqlServer And Oracle This is a alpha program,but which can run and generate code. Generate database table info into MS WordXDA ROM HUB: XDA ROM HUB v0.9: Kernel listing added -- Thanks to iONEx Added scripts installer button. Added "Nandroid On The Go" -- Perform a Nandroid backup without a PC! 
Added official Android app!ExtAspNet: ExtAspNet v3.1.8.2: +2012-06-24 v3.1.8 +????Grid???????(???????ExpandUnusedSpace????????)(??)。 -????MinColumnWidth(??????)。 -????AutoExpandColumn,???????????????(ColumnID)(?????ForceFitFirstTime??ForceFitAllTime,??????)。 -????AutoExpandColumnMax?AutoExpandColumnMin。 -????ForceFitFirstTime,????????????,??????????(????????????)。 -????ForceFitAllTime,????????????,??????????(??????????????????)。 -????VerticalScrollWidth,????????(??????????,0?????????????)。 -????grid/grid_forcefit.aspx。 -???????????En...AJAX Control Toolkit: June 2012 Release: AJAX Control Toolkit Release Notes - June 2012 Release Version 60623June 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found here: 11121. - Pages using ...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.5: Version: 2.5.0.5 (Milestone 5): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Add IsInDesignMode property to the WafConfiguration class. WAF: Introduce the IModuleController interface. WAF: Add ...Windows 8 Metro RSS Reader: Metro RSS Reader.v7: Updated for Windows 8 Release Preview Changed background and foreground colors Used VariableSizeGrid layout to wrap blog posts with images Sort items with Images first, text-only last Enabled Caching to improve navigation between framesConfuser: Confuser 1.9: Change log: * Stable output (i.e. given the same seed & input assemblies, you'll get the same output assemblies) + Generate debug symbols, now it is possible to debug the output under a debugger! (Of course without enabling anti debug) + Generating obfuscation database, most of the obfuscation data in stored in it. + Two tools utilizing the obfuscation database (Database viewer & Stack trace decoder) * Change the protection scheme -----Please read Bug Report before you report a bug-----...XDesigner.Development: First release: First releaseBlackJumboDog: Ver5.6.5: 2012.06.22 Ver5.6.5  (1) FTP??????? EPSV ?? EPRT ???????MVVM Light Toolkit: V4RTM (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RTM. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibs An installer with binaries, snippets and templates will follow ASAP.Weapsy - ASP.NET MVC CMS: 1.0.0: - Some changes to Layout and CSS - Changed version number to 1.0.0.0 - Solved Cache and Session items handler error in IIS 7 - Created the Modules, Plugins and Widgets Areas - Replaced CKEditor with TinyMCE - Created the System Info page - Minor changesAcDown????? - AcDown Downloader Framework: AcDown????? v3.11.7: ?? 
●AcDown??????????、??、??????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ...New Projects1000: ONE THOUSANDBackground Folder Copy for Sharepoint 2010: En esta solución se combinan varios elementos para conseguir sincronizar una carpeta local o compartida con una librería de documentos de Sharepoint 2010Cache in sandbox: The illustration for the question on Stack Overflow: http://stackoverflow.com/q/11182321/240613CCITTCodecs: A native .NET CCITT Group4 Codec. This is NOT a full featured tiff library. It only encodes pixels into group4.CharTest: Small utility that allows you to analyze a string of characters and see at-a-glance in which categories each characters belongs and other small features.EDT Attribute Editor: One of the applications in the Ecosystem Diagnosis & Treatment suite.EDT Geometry Navigator: One of the applications in the Ecosystem Diagnosis & Treatment suite.EDT Population Editor: One of the applications in the Ecosystem Diagnosis & Treatment suite.EDT Report Generator: One of the applications in the Ecosystem Diagnosis & Treatment suite.ePay payment gateway provider for NB_Store: ePay payment gateway provider for DNN module NB_StoreFizzBuzzCode: FizzBuzz coding assignmentINI Library: The IniLibrary is a simplified C# library for accessing INI files quickly and efficiently. It is able to parse virtually all INI files.jos .net sdk: jos .net sdkjQuery Metro Plug in: Simple jQuery plug in to create easy Metro UI.Metro Event To Command: Metro Event To Command is a replacement for EventToCommand behavior for Windows RT. MyMichael: My MichaelNETDeob0: .NET Deobfuscator by UbbeLoLNokia Image Downloader: Simple application to download image and video files from some source (typically phone). Application ensures that only new files will be downloaded into specified folder; previously downloaded files are ignored. Moreover, user can specify which folders will be searched for image and video files.PaRV: Parallelizing Runtime Detection and Prevention of Concurrency Errors: Will be added soonSimple Things to do website: Users can manage their own individual to-do lists by registering themselves on this site.SP Workflows: The SP Workflows utility is a tool used for monitoring a “Workflow Internal State” and “Workflow Status” for the workflows associated to a SharePoint List testdd06252012: zxctestddgit062520122: ghtestddhg06252012: zcxtesthg062520122: gfTopAddIn: Excel???? VSTOUniversal redirector: Use NTFS junctions to redirect directories, the easy way!VSE: This is a source management site for VSE before deployed.WPFClock: Simple clock app, written in WFP.www.blfoley.com Samples by Bradley Foley: A project containing various sample on quick ways to do thingsZune Connection Detector: A quick, clean method to detect if Zune is connected with Windows Phone.zy26: zy26 was here...

    Read the article

  • CodePlex Daily Summary for Saturday, August 11, 2012

    CodePlex Daily Summary for Saturday, August 11, 2012Popular Releases????: ????2.0.5: 1、?????????????。RiP-Ripper & PG-Ripper: PG-Ripper 1.4.01: changes NEW: Added Support for Clipboard Function in Mono Version NEW: Added Support for "ImgBox.com" links FIXED: "PixHub.eu" links FIXED: "ImgChili.com" links FIXED: Kitty-Kats Forum loginVirtual Keyboard: Virtual Keyboard v1.0.2: 1) Changed the background color to #FFD4D4D4 2) Increased the font size to 20. 3) Changed the font type to Times New RomanPlayer Framework by Microsoft: Player Framework for Windows 8 (Preview 5): Support for Smooth Streaming SDK beta 2 Support for live playback New bitrate meter and SD/HD indicators Auto smooth streaming track restriction for snapped mode to conserve bandwidth New "Go Live" button and SeekToLive API Support for offset start times Support for Live position unique from end time Support for multiple audio streams (smooth and progressive content) Improved intellisense in JS version Support for Windows 8 RTM ADDITIONAL DOWNLOADSSmooth Streaming Client SD...Mugen Injection: Mugen Injection 2.6: Fixed incorrect work with children when creating the MugenInjector. Added the ability to use the IActivator after create object using MethodBinding or CustomBinding. Added new fluent syntax for MethodBinding and CustomBinding. Added new features for working with the ModuleManagerComponent. Fixed some bugs.AutoShutdown.NET: AutoShutdown.NET: This is the first release of AutoShutdown.NET marked with beta, but it's fully functional and work nice without any problem. This release has no installer and you can download and extract the zip file and use it on any machine that runs .NET framework 2.0 or later. Your suggestions and feedback are always welcomed. Contact me on imun22{at}gmail.com Hope you find it useful as i am;MyDbUtils: MyDbUtils_0.9.7.0: Refresh objects from database before generating the SQL script file.Linq2IndexedDB: Linq2IndexedDB 1.0.12: added support for nested properties in the select and orderby functions Fixed bug in sorting Refactored querybuilder added support for multiple inserts Added conditional remove Added support for merging data to multiple objects (also conditional) Added new filter: isUndefinedLearnToProgram: Teaching Kids Programming Java Eclipse v01: Open the zip Open Eclipse Choose/Switch to the included workspaceAutoLaunch for Windows Embedded Compact (CE): AutoLaunch for Compact 7 v300: What's New:In this release, the following sub-components are added to AutoLaunch_v300: - Autolaunch CoreCon - Autolaunch Remote Display application. When the "Autolaunch CoreCon" sub-component is included to an OS design, it includes the build scripts to add CoreCon files to the image and registry entries to launch CoreCon during startup, to support Visual Studio application development. 
When the "Autolaunch Remote Display application" sub-component is included to an OS design, it set...spUtils: spUtils_v1.0: Public Methods:If SP2010 or above: spUtils.addStatus spUtils.closeDialog spUtils.createListItems spUtils.deleteListItems spUtils.getListItems spUtils.notify spUtils.onDialogClose spUtils.openModalForm spUtils.removeNotify spUtils.updateListItems If jQuery is loaded: spUtils.getFormVal spUtils.setFormValSQLLib: Alpha release 06: Added tsql.fnrecsgenHTTP Server API Configuration: HttpSysManager 1.0: *Set Url ACL *Bind https endpoint to certificateFluentData -Micro ORM with a fluent API that makes it simple to query a database: FluentData version 2.3.0.0: - Added support for SQLite, PostgreSQL and IBM DB2. - Added new method, QueryDataTable which returns the query result as a datatable. - Fixed some issues. - Some refactoring. - Select builder with support for paging and improved support for auto mapping.JSON C# Class Generator: JSON CSharp Class Generator 1.3: Support for native JSON.net serializer/deserializer (POCO) New classes layout option: nested classes Better handling of secondary classesAxiom 3D Rendering Engine: v0.8.3376.12322: Changes Since v0.8.3102.12095 ===================================================================== Updated ndoc3 binaries to fix bug Added uninstall.ps1 to nuspec packages fixed revision component in version numbering Fixed sln referencing VS 11 Updated OpenTK Assemblies Added CultureInvarient to numeric parsing Added First Visual Studio 2010 Project Template (DirectX9) Updated SharpInputSystem Assemblies Backported fix for OpenGL Auto-created window not responding to input Fixed freeInterna...DotSpatial: DotSpatial 1.3: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...BugNET Issue Tracker: BugNET 1.0: This release brings performance enhancements, improvements and bug fixes throughout the application. Various parts of the UI have been made consistent with the rest of the application and custom queries have been improved to better handle custom fields. Spanish and Dutch languages were also added in this release. Special thanks to wrhighfield for his many contributions to this release! Upgrade Notes Please see this thread regarding changes to the web.config and files in this release. htt...Iveely Search Engine: Iveely Search Engine (0.1.0): ?????????,???????????。 This is a basic version, So you do not think it is a good Search Engine of this version, but one day it is. only basic on text search. ????: How to use: 1. ?????????IveelySE.Spider.exe ??,????????????,?????????(?????,???????,??????????????。) Find the file which named IveelySE.Spider.exe, and input you link string like "http://www.cnblogs.com",and enter. 2 . 
???????,???????IveelySE.Index.exe ????,????。?????。 When the spider finish working,you can run anther file na...Json.NET: Json.NET 4.5 Release 8: New feature - Serialize and deserialize multidimensional arrays New feature - Members on dynamic objects with JsonProperty/DataMember will now be included in serialized JSON New feature - LINQ to JSON load methods will read past preceding comments when loading JSON New feature - Improved error handling to return incomplete values upon reaching the end of JSON content Change - Improved performance and memory usage when serializing Unicode characters Change - The serializer now create...New ProjectsBlack2Json: Small & Simple conversion utility to convert the EVE-Online binary model description files (*.black) back to human readable format (*.json). Captcha.deDogs.com: Places a Captcha Image into an ASP.NET Web Forms application. If Captcha characters difficult to distinguish, control allows refresh of characters. dl: fffEffortless .Net Encryption: Effortless .Net Encryption is a library that provides: * Rijndael encryption/decyption. * Hashing and Digest creation/validation. * Password and salt creation.Fishbone: Fishbone will be a web based project management application suite. Git Tfs Sandbox: This repository just contains tests to see if git-tfs can correctly clone them.lambda calculus interpreter in F#: a simple lambda-calculus interpreter implemented in F#LanChatting: summarypersonal: half assed testingSagenhaft: Manage your Steam games, archive them someplace else, move them back or have them installed on a different drive! All this is packed into an easy-to-use wizard.sandnntaskmanager: This project is done for learning Dotnetnuke, and it is used for taskmanager tasks like inserting , deleting and updating ..... thanks santosh pothankar SRecordizer: SRecordizer is a quick and simple S19 (Motorola S-Record) file editor created to fill the void.URL Shortener by theUltrasoft: URL Shortener API Library enables you to integrate any web-application to use our robust url shortening technology.Windows Auto-Login and Application Auto-Start Setup Tool: Developed over C# .NET 4.0, this simple setup tool presents a simple interface to configure Windows automatic login and automatic application start.Windows Uninstaller: A tool to Uninstall Windows. A part of The GLMET Project. Delete Windows once You click on it. Your Anti Virus may think It is Virus because it delete Windows.

    Read the article

  • CodePlex Daily Summary for Monday, June 25, 2012

    CodePlex Daily Summary for Monday, June 25, 2012Popular ReleasesUmbraco CMS: Umbraco CMS 5.2: Development on Umbraco v5 discontinued After much discussion and consultation with leaders from the Umbraco community it was decided that work on the v5 branch would be discontinued with efforts being refocused on the stable and feature rich v4 branch. For full details as to why this decision was made please watch the CodeGarden 12 Keynote. What about all that hard work?!?? We are not binning everything and it does not mean that all work done on 5 is lost! we are taking all of the best and m...IIS Express Manager: IIS Express 0.31 B: V0.1B - 04 May, 2012 Initiated Project. V0.2B - 05May, 2012 1. Fixed small bug. Threw error when stop button was pressed in an already stopped application. 2. Removed start and stop button. Double clicking on list items will now stop / start the websites. 3. Improved code readability. 4. Changed Orientation of Buttons in UI. V0.3B - 06May, 2012 1. Complete modification of IISEM and process ID handling 2. IISEM is now capable of reflecting the existing IISExpress processes right from startup...SPMegaMenu 0.2.0.a: SPMegaMenu 0.2.0.a: SPMegaMenu 0.2.0.a - *Refined the menu to allow for sub category additions. *Release 0.1.0.a did not allow for sub categories. *Also added a Javascript Array Prototype to facilitate removal of duplicates in the sub category return. (the prototype is added as a workaround as I am not able to get the CAML GroupBy function to work correctly against a lookup column)CodeGenerate: CodeGenerate Alpha: The Project can auto generate C# code. Include BLL Layer、Domain Layer、IDAL Layer、DAL Layer. Support SqlServer And Oracle This is a alpha program,but which can run and generate code. Generate database table info into MS WordXDA ROM HUB: XDA ROM HUB v0.9: Kernel listing added -- Thanks to iONEx Added scripts installer button. Added "Nandroid On The Go" -- Perform a Nandroid backup without a PC! Added official Android app!ExtAspNet: ExtAspNet v3.1.8.1: +2012-06-24 v3.1.8 +????Grid???????(???????ExpandUnusedSpace????????)(??)。 -????MinColumnWidth(??????)。 -????AutoExpandColumn,???????????????(ColumnID)(?????ForceFitFirstTime??ForceFitAllTime,??????)。 -????AutoExpandColumnMax?AutoExpandColumnMin。 -????ForceFitFirstTime,????????????,??????????(????????????)。 -????ForceFitAllTime,????????????,??????????(??????????????????)。 -????VerticalScrollWidth,????????(??????????,0?????????????)。 -????grid/grid_forcefit.aspx。 -???????????En...AJAX Control Toolkit: June 2012 Release: AJAX Control Toolkit Release Notes - June 2012 Release Version 60623June 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found here: 11121. - Pages using ...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.5: Version: 2.5.0.5 (Milestone 5): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Add IsInDesignMode property to the WafConfiguration class. 
WAF: Introduce the IModuleController interface. WAF: Add ...Windows 8 Metro RSS Reader: Metro RSS Reader.v7: Updated for Windows 8 Release Preview Changed background and foreground colors Used VariableSizeGrid layout to wrap blog posts with images Sort items with Images first, text-only last Enabled Caching to improve navigation between framesConfuser: Confuser 1.9: Change log: * Stable output (i.e. given the same seed & input assemblies, you'll get the same output assemblies) + Generate debug symbols, now it is possible to debug the output under a debugger! (Of course without enabling anti debug) + Generating obfuscation database, most of the obfuscation data in stored in it. + Two tools utilizing the obfuscation database (Database viewer & Stack trace decoder) * Change the protection scheme -----Please read Bug Report before you report a bug-----...XDesigner.Development: First release: First releaseBlackJumboDog: Ver5.6.5: 2012.06.22 Ver5.6.5  (1) FTP??????? EPSV ?? EPRT ???????DotNetNuke® Form and List: 06.00.01: DotNetNuke Form and List 06.00.01 Changes in 06.00.01 Icons are shown in module action buttons (workaraound to core issue with IconAPI) Fix to Token2XSL Editor, changing List type raised exception MakeTumbnail and ShowXml handlers had been missing in install package Updated help texts for better understanding of filter statement, token support in email subject and css style Major changes for fnL 6.0: DNN 6 Form Patterns including modal PopUps and Tabs http://www.dotnetnuke.com/Po...MVVM Light Toolkit: V4RTM (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RTM. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibs An installer with binaries, snippets and templates will follow ASAP.Weapsy - ASP.NET MVC CMS: 1.0.0: - Some changes to Layout and CSS - Changed version number to 1.0.0.0 - Solved Cache and Session items handler error in IIS 7 - Created the Modules, Plugins and Widgets Areas - Replaced CKEditor with TinyMCE - Created the System Info page - Minor changesAcDown????? - AcDown Downloader Framework: AcDown????? v3.11.7: ?? ●AcDown??????????、??、??????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ...NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.3 - VS2010 + VS2012: This is a small maintenance release to support new VS2012 as well as VS2010. 
This release is also fixing the issue The "Comment Selection" include the first line after the selection If the new NShader version doesn't highlight your shader, you can try to: Remove the registry entry: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\FontAndColors\Cache Remove all lines using "fx" or "hlsl" in file C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Micr...3D Landmark Recognition: Landmark3D Dataset V1.0 (7z 1 of 3): Landmark3D Dataset Version 1.0Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0: System Requirements OS Windows 7 Windows Vista Windows Server 2008 Windows Server 2008 R2 Web Server Internet Information Service 7.0 or above .NET Framework .NET Framework 4.0 WCF Activation feature HTTP Activation Non-HTTP Activation for net.pipe/net.tcp WCF bindings ASP.NET MVC ASP.NET MVC 3.0 Database Microsoft SQL Server 2005 Microsoft SQL Server 2008 Microsoft SQL Server 2008 R2 Additional Deployment Configuration Started Windows Process Activation service Start...MFCMAPI: June 2012 Release: Build: 15.0.0.1034 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeNew ProjectsAudio-Gallery-Suite: Audio-Gallery-Suite is a complete audio gallery solution that includes a web audio gallery and a windows application for its complete management.caoliu_????????-????: ????????????Decanini: Teste de SummaryDevToolkit: placeholderdotCommand: dotCommand project explores various .NET implementations of Command design pattern, by providing and measuring those implementations.Engine Wars Chess Manager Service: EngineWars is Client/Server framework to host chess matches of humans vs humans, humans vs Chess Engines, and engines vs engines in casual play and tournaments.FsHtml: A tiny HTML parser in F#JAVALINKDotHandler: BÁSICO Sistema para avaliação de A5LP1 - IFSP - Campi SÃO PAULOKylinORM ????: KylinORM???????,??AOP?DDD???????????。LegendJS 2D: A 2D engine written in Javascript and Canvas. Aims to be easy learn and flexible enough to develop complex 2D game or other 2D application. LegendJS manages youMetroToolbox: A library of useful tools for developer metro apps on windows rt.M-i-c-r-o-S-o-f-t-W-M-S: M8i8c8r8o8S8o8f8t M8i8c8r8o8S8o8f8t M8i8c8r8o8S8o8f8tMineSweeping: 1234567890Multiplatform GTK Docking Panel in MONO: This is an open source project to develop a multiplatform dock-panel library in Gtk for Mono.MyAbcdefg_abcdefg: MyAbcdefg_abcdefgPlugin PagSeguro for NopCommerce 2.5: PagSeguro paymento module for nopcommerce 2.5 This is a brazilian payment method.Reckoning: Reckoning is a word counter that reports the frequencies of words in a piece of text. Seo Proxy: Seo Proxy is a proxy fetcher and checker. Project is intended to create ultimate tool to get proxies from websites and build quality lists.SharePoint Auriga blog: -SharpToolkit: placeholderShift2D: Shift2D????XNA?? Shift2D??????WPF、Silverlight????,????????,???????,?????????。Simple TFTP loader for Windows Embedded Compact EBOOT: Utility to download NK.BIN files to EBOOT without using PLatform Builder'SUMC Reader: SUMC reader - read info from http://m.sumc.bg I named SUMC, because that's the official nickname of Sofia public transport. 
t: tVertical Shooter RPG: Vertical Shooter RPG is an XNA open source game that will be an update to the vertical shooter genre by adding a world map and upgradable options.w___m___s___uupiwqewrqewypouyupqurpe: sdfasdfadsfafadsfsadfworkmanagesystem: This is the Course Project for School of E&M NCEPU owned by OLDBIG12. Now OLDBIG12 and Silent are develeoping it. It's just my first project on CodePlex.

    Read the article

  • Stuck with Apache2

    - by Gundars Meness
I can't finish the Apache2 install, and I can't remove it either. It has blocked dpkg, so now I can't install or remove anything. I even tried a distribution upgrade, but dpkg is still broken. How do I fix this and get a normal Apache2 running? Just for the heck of it:
gundars@SR528:~$ sudo apt-get remove apache2-common
Reading package lists... Done Building dependency tree Reading state information... Done Package 'apache2-common' is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up apache2.2-common (2.2.22-6ubuntu2) ... ERROR: Site default does not exist! dpkg: error processing apache2.2-common (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of apache2-mpm-prefork: apache2-mpm-prefork depends on apache2.2-common (= 2.2.22-6ubuntu2); however: Package apache2.2-common is not configured yet. dpkg: error processing apache2-mpm-prefork (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: apache2.2-common apache2-mpm-prefork E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get -f install apache2 apache2.2-common apache2-mpm-prefork
[sudo] password for gundars: Reading package lists... Done Building dependency tree Reading state information... Done apache2 is already the newest version. apache2-mpm-prefork is already the newest version. apache2.2-common is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 4 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up apache2.2-common (2.2.22-6ubuntu2) ... ERROR: Site default does not exist! dpkg: error processing apache2.2-common (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of apache2-mpm-prefork: apache2-mpm-prefork depends on apache2.2-common (= 2.2.22-6ubuntu2); however: Package apache2.2-common is not configured yet. dpkg: error processing apache2-mpm-prefork (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of apache2: apache2 depends on apache2-mpm-worker (= 2.2.22-6ubuntu2) | apache2-mpm-prefork (= 2.2.22-6ubuntu2) | apache2-mpm-event (= 2.2.22-6ubuntu2) | apache2-mpm-itk (= 2.2.22-6ubuntu2); however: Package apache2-mpm-worker is not installed. Package apache2-mpm-prefork is not configured yet. Package apache2-mpm-event is not installed. Package apache2-mpm-itk is not installed. apache2 depends on apache2.2-common (= 2.2.22-6ubuntu2); however: Package apache2.2-common is not configured yet. dpkg: error processing apache2 (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libapache2-mod-php5: libapache2-mod-php5 depends on apache2-mpm-prefork (>> 2.0.52) | apache2-mpm-itk; however: Package apache2-mpm-prefork is not configured yet. Package apache2-mpm-itk is not installed. libapache2-mod-php5 depends on apache2.2-common; however: Package apache2.2-common is not configured yet.
No apport report written because MaxReports is reached already No apport report written because MaxReports is reached already dpkg: error processing libapache2-mod-php5 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: apache2.2-common apache2-mpm-prefork apache2 libapache2-mod-php5 E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • URGENT: Firefox circular-dependency hell - Linux Mint 13 (based on Ubuntu 12.04)

    - by Tyler J Fisher
    Having difficulty re-installing Firefox, after an installation to resolve places.sqlite issues. It appears that I'm trapped in circular dependency hell. Need to resolve firefox dependency hell to attempt to resolve Tomcat6 project dependencies (don't ask), ASAP. Have been trying for hours. What I've done (brief) 1) sudo apt-get purge firefox firefox-globalmenu firefox-gnome-support 2) sudo apt-get update 3) sudo apt-get install firefox firefox-globalmenu firefox-gnome-support 4) sudo apt-get -f install Potential error sources: Found in(sudo apt-get install firefox firefox-globalmenu firefox-gnome-support) dpkg: error processing /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb (--unpack): trying to overwrite '/usr/lib/firefox/extensions', which is also in package mint-search-addon 2012.05.11 So, /usr/lib/firefox/extensions doesn't even EXIST! Deleted /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701 as per recommendations. Errors were encountered while processing: /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Outputs: 1) sudo apt-get purge firefox firefox-globalmenu firefox-gnome-support me@machine ~ $ sudo apt-get purge firefox-gnome-support firefox firefox-globalmenu Reading package lists... Done Building dependency tree Reading state information... Done Package firefox is not installed, so not removed The following packages will be REMOVED: firefox-globalmenu* firefox-gnome-support* 2 not fully installed or removed. 0 upgraded, 0 newly installed, 2 to remove and 38 not upgraded. After this operation, 460 kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... dpkg: warning: files list file for package `mysqltuner' missing, assuming package has no files currently installed. (Reading database ... 192642 files and directories currently installed.) Removing firefox-globalmenu ... Removing firefox-gnome-support ... 3) me@machine ~ $ sudo apt-get install firefox firefox-globalmenu firefox-gnome-support Reading package lists... Done Building dependency tree Reading state information... Done Suggested packages: latex-xft-fonts The following NEW packages will be installed: firefox firefox-globalmenu firefox-gnome-support 0 upgraded, 3 newly installed, 0 to remove and 38 not upgraded. Need to get 0 B/24.8 MB of archives. After this operation, 54.3 MB of additional disk space will be used. (Reading database ... dpkg: warning: files list file for package `mysqltuner' missing, assuming package has no files currently installed. (Reading database ... 192619 files and directories currently installed.) Unpacking firefox (from .../firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ... dpkg: error processing /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb (--unpack): trying to overwrite '/usr/lib/firefox/extensions', which is also in package mint-search-addon 2012.05.11 Selecting previously unselected package firefox-globalmenu. Unpacking firefox-globalmenu (from .../firefox-globalmenu_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ... Selecting previously unselected package firefox-gnome-support. Unpacking firefox-gnome-support (from .../firefox-gnome- support_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... 
Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for mintsystem ... Errors were encountered while processing: /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701- 0ubuntu1~umd1~precise_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) 4) sudo apt-get -f install 0 upgraded, 0 newly installed, 0 to remove, and 38 not upgraded Ideas? Tomcat6 only deploys my web application successfully in Firefox, not Chrome, so I'm really hoping to resolve this dependency issue.

    Read the article

  • Debian squeeze keyboard and touchpad not working / detected on laptop

    - by Esa
    They work before gdm3 starts. a connected mouse also stops working, but functions after removal and re-plug. no xorg.conf. log doesn't show any loading of drivers for kbd/touchpad [ 33.783] X.Org X Server 1.10.4 Release Date: 2011-08-19 [ 33.783] X Protocol Version 11, Revision 0 [ 33.783] Build Operating System: Linux 3.0.0-1-amd64 x86_64 Debian [ 33.783] Current Operating System: Linux sus 3.2.0-0.bpo.2-amd64 #1 SMP Sun Mar 25 10:33:35 UTC 2012 x86_64 [ 33.783] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-0.bpo.2-amd64 root=UUID=8686f840-d165-4d1e-b995-2ebbd94aa3d2 ro quiet [ 33.783] Build Date: 28 August 2011 09:39:43PM [ 33.783] xorg-server 2:1.10.4-1~bpo60+1 (Cyril Brulebois <[email protected]>) [ 33.783] Current version of pixman: 0.16.4 [ 33.783] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 33.783] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 33.783] (==) Log file: "/var/log/Xorg.0.log", Time: Wed Mar 28 09:34:04 2012 [ 33.837] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 33.936] (==) No Layout section. Using the first Screen section. [ 33.936] (==) No screen section available. Using defaults. [ 33.936] (**) |-->Screen "Default Screen Section" (0) [ 33.936] (**) | |-->Monitor "<default monitor>" [ 33.936] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 33.936] (==) Automatically adding devices [ 33.936] (==) Automatically enabling devices [ 34.164] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 34.164] Entry deleted from font path. [ 34.226] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/75dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, /usr/share/fonts/X11/75dpi, /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, built-ins [ 34.226] (==) ModulePath set to "/usr/lib/xorg/modules" [ 34.226] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 34.226] (II) Loader magic: 0x7d3ae0 [ 34.226] (II) Module ABI versions: [ 34.226] X.Org ANSI C Emulation: 0.4 [ 34.226] X.Org Video Driver: 10.0 [ 34.226] X.Org XInput driver : 12.2 [ 34.226] X.Org Server Extension : 5.0 [ 34.227] (--) PCI:*(0:1:5:0) 1002:9712:103c:1661 rev 0, Mem @ 0xd0000000/268435456, 0xf1400000/65536, 0xf1300000/1048576, I/O @ 0x00008000/256 [ 34.227] (--) PCI: (0:2:0:0) 1002:6760:103c:1661 rev 0, Mem @ 0xe0000000/268435456, 0xf0300000/131072, I/O @ 0x00004000/256, BIOS @ 0x????????/131072 [ 34.227] (II) Open ACPI successful (/var/run/acpid.socket) [ 34.227] (II) LoadModule: "extmod" [ 34.249] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so [ 34.277] (II) Module extmod: vendor="X.Org Foundation" [ 34.277] compiled for 1.10.4, module version = 1.0.0 [ 34.277] Module class: X.Org Server Extension [ 34.277] ABI class: X.Org Server Extension, version 5.0 [ 34.277] (II) Loading extension SELinux [ 34.277] (II) Loading extension MIT-SCREEN-SAVER [ 34.277] (II) Loading extension XFree86-VidModeExtension [ 34.277] (II) Loading extension XFree86-DGA [ 34.277] (II) Loading extension DPMS [ 34.277] (II) Loading extension XVideo [ 34.277] (II) Loading extension XVideo-MotionCompensation [ 34.277] (II) Loading extension X-Resource [ 34.277] (II) LoadModule: "dbe" [ 34.277] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so [ 34.299] (II) Module dbe: vendor="X.Org Foundation" [ 34.299] compiled for 1.10.4, module version = 1.0.0 [ 34.299] Module class: X.Org Server Extension [ 34.299] ABI class: X.Org Server Extension, version 5.0 [ 34.299] (II) Loading extension DOUBLE-BUFFER [ 34.299] (II) LoadModule: "glx" [ 34.299] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 34.477] (II) Module glx: vendor="X.Org Foundation" [ 34.477] compiled for 1.10.4, module version = 1.0.0 [ 34.477] ABI class: X.Org Server Extension, version 5.0 [ 34.477] (==) AIGLX enabled [ 34.477] (II) Loading extension GLX [ 34.477] (II) LoadModule: "record" [ 34.478] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so [ 34.481] (II) Module record: vendor="X.Org Foundation" [ 34.481] compiled for 1.10.4, module version = 1.13.0 [ 34.481] Module class: X.Org Server Extension [ 34.481] ABI class: X.Org Server Extension, version 5.0 [ 34.481] (II) Loading extension RECORD [ 34.481] (II) LoadModule: "dri" [ 34.481] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so [ 34.512] (II) Module dri: vendor="X.Org Foundation" [ 34.512] compiled for 1.10.4, module version = 1.0.0 [ 34.512] ABI class: X.Org Server Extension, version 5.0 [ 34.512] (II) Loading extension XFree86-DRI [ 34.512] (II) LoadModule: "dri2" [ 34.512] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so [ 34.515] (II) Module dri2: vendor="X.Org Foundation" [ 34.515] compiled for 1.10.4, module version = 1.2.0 [ 34.515] ABI class: X.Org Server Extension, version 5.0 [ 34.515] (II) Loading extension DRI2 [ 34.515] (==) Matched ati as autoconfigured driver 0 [ 34.515] (==) Matched vesa as autoconfigured driver 1 [ 34.515] (==) Matched fbdev as autoconfigured driver 2 [ 34.515] (==) Assigned the driver to the xf86ConfigLayout [ 34.515] (II) LoadModule: "ati" [ 34.706] (II) Loading /usr/lib/xorg/modules/drivers/ati_drv.so [ 34.724] (II) Module ati: vendor="X.Org Foundation" [ 34.724] compiled for 1.10.3, module version = 6.14.2 [ 34.724] Module class: X.Org Video Driver [ 34.724] ABI class: X.Org Video Driver, version 10.0 [ 34.724] (II) LoadModule: "radeon" [ 34.725] (II) Loading 
/usr/lib/xorg/modules/drivers/radeon_drv.so [ 34.923] (II) Module radeon: vendor="X.Org Foundation" [ 34.923] compiled for 1.10.3, module version = 6.14.2 [ 34.923] Module class: X.Org Video Driver [ 34.923] ABI class: X.Org Video Driver, version 10.0 [ 34.945] (II) LoadModule: "vesa" [ 34.945] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 34.988] (II) Module vesa: vendor="X.Org Foundation" [ 34.988] compiled for 1.10.3, module version = 2.3.0 [ 34.988] Module class: X.Org Video Driver [ 34.988] ABI class: X.Org Video Driver, version 10.0 [ 34.988] (II) LoadModule: "fbdev" [ 34.988] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 35.020] (II) Module fbdev: vendor="X.Org Foundation" [ 35.020] compiled for 1.10.3, module version = 0.4.2 [ 35.020] ABI class: X.Org Video Driver, version 10.0 [ 35.020] (II) RADEON: Driver for ATI Radeon chipsets: <snip> [ 35.023] (II) VESA: driver for VESA chipsets: vesa [ 35.023] (II) FBDEV: driver for framebuffer: fbdev [ 35.023] (++) using VT number 7 [ 35.033] (II) Loading /usr/lib/xorg/modules/drivers/radeon_drv.so [ 35.033] (II) [KMS] Kernel modesetting enabled. [ 35.033] (WW) Falling back to old probe method for vesa [ 35.034] (WW) Falling back to old probe method for fbdev [ 35.034] (II) Loading sub module "fbdevhw" [ 35.034] (II) LoadModule: "fbdevhw" [ 35.034] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 35.185] (II) Module fbdevhw: vendor="X.Org Foundation" [ 35.185] compiled for 1.10.4, module version = 0.0.2 [ 35.185] ABI class: X.Org Video Driver, version 10.0 [ 35.288] (II) RADEON(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32 [ 35.288] (==) RADEON(0): Depth 24, (--) framebuffer bpp 32 [ 35.288] (II) RADEON(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps) [ 35.288] (==) RADEON(0): Default visual is TrueColor [ 35.288] (==) RADEON(0): RGB weight 888 [ 35.288] (II) RADEON(0): Using 8 bits per RGB (8 bit DAC) [ 35.288] (--) RADEON(0): Chipset: "ATI Mobility Radeon HD 4200" (ChipID = 0x9712) [ 35.288] (II) RADEON(0): PCI card detected [ 35.288] drmOpenDevice: node name is /dev/dri/card0 [ 35.288] drmOpenDevice: open result is 9, (OK) [ 35.288] drmOpenByBusid: Searching for BusID pci:0000:01:05.0 [ 35.288] drmOpenDevice: node name is /dev/dri/card0 [ 35.288] drmOpenDevice: open result is 9, (OK) [ 35.288] drmOpenByBusid: drmOpenMinor returns 9 [ 35.288] drmOpenByBusid: drmGetBusid reports pci:0000:01:05.0 [ 35.288] (II) Loading sub module "exa" [ 35.288] (II) LoadModule: "exa" [ 35.288] (II) Loading /usr/lib/xorg/modules/libexa.so [ 35.335] (II) Module exa: vendor="X.Org Foundation" [ 35.335] compiled for 1.10.4, module version = 2.5.0 [ 35.335] ABI class: X.Org Video Driver, version 10.0 [ 35.335] (II) RADEON(0): KMS Color Tiling: disabled [ 35.335] (II) RADEON(0): KMS Pageflipping: enabled [ 35.335] (II) RADEON(0): SwapBuffers wait for vsync: enabled [ 35.360] (II) RADEON(0): Output VGA-0 has no monitor section [ 35.360] (II) RADEON(0): Output LVDS has no monitor section [ 35.364] (II) RADEON(0): Output HDMI-0 has no monitor section [ 35.388] (II) RADEON(0): EDID for output VGA-0 [ 35.388] (II) RADEON(0): EDID for output LVDS [ 35.388] (II) RADEON(0): Manufacturer: LGD Model: 2ac Serial#: 0 [ 35.388] (II) RADEON(0): Year: 2010 Week: 0 [ 35.388] (II) RADEON(0): EDID Version: 1.3 [ 35.388] (II) RADEON(0): Digital Display Input [ 35.388] (II) RADEON(0): Max Image Size [cm]: horiz.: 34 vert.: 19 [ 35.388] (II) RADEON(0): Gamma: 2.20 [ 35.388] (II) RADEON(0): 
No DPMS capabilities specified [ 35.388] (II) RADEON(0): Supported color encodings: RGB 4:4:4 YCrCb 4:4:4 [ 35.388] (II) RADEON(0): First detailed timing is preferred mode [ 35.388] (II) RADEON(0): redX: 0.616 redY: 0.371 greenX: 0.355 greenY: 0.606 [ 35.388] (II) RADEON(0): blueX: 0.152 blueY: 0.100 whiteX: 0.313 whiteY: 0.329 [ 35.388] (II) RADEON(0): Manufacturer's mask: 0 [ 35.388] (II) RADEON(0): Supported detailed timing: [ 35.388] (II) RADEON(0): clock: 69.3 MHz Image Size: 344 x 194 mm [ 35.388] (II) RADEON(0): h_active: 1366 h_sync: 1398 h_sync_end 1430 h_blank_end 1486 h_border: 0 [ 35.388] (II) RADEON(0): v_active: 768 v_sync: 770 v_sync_end 774 v_blanking: 782 v_border: 0 [ 35.388] (II) RADEON(0): LG Display [ 35.388] (II) RADEON(0): LP156WH2-TLQB [ 35.388] (II) RADEON(0): EDID (in hex): [ 35.388] (II) RADEON(0): 00ffffffffffff0030e4ac0200000000 [ 35.388] (II) RADEON(0): 00140103802213780ac1259d5f5b9b27 [ 35.388] (II) RADEON(0): 19505400000001010101010101010101 [ 35.388] (II) RADEON(0): 010101010101121b567850000e302020 [ 35.388] (II) RADEON(0): 240058c2100000190000000000000000 [ 35.388] (II) RADEON(0): 00000000000000000000000000fe004c [ 35.388] (II) RADEON(0): 4720446973706c61790a2020000000fe [ 35.388] (II) RADEON(0): 004c503135365748322d544c514200c1 [ 35.388] (II) RADEON(0): Printing probed modes for output LVDS [ 35.388] (II) RADEON(0): Modeline "1366x768"x59.6 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 35.388] (II) RADEON(0): Modeline "1280x720"x59.9 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync (44.8 kHz) [ 35.388] (II) RADEON(0): Modeline "1152x768"x59.8 71.75 1152 1216 1328 1504 768 771 781 798 -hsync +vsync (47.7 kHz) [ 35.388] (II) RADEON(0): Modeline "1024x768"x59.9 63.50 1024 1072 1176 1328 768 771 775 798 -hsync +vsync (47.8 kHz) [ 35.388] (II) RADEON(0): Modeline "800x600"x59.9 38.25 800 832 912 1024 600 603 607 624 -hsync +vsync (37.4 kHz) [ 35.388] (II) RADEON(0): Modeline "848x480"x59.7 31.50 848 872 952 1056 480 483 493 500 -hsync +vsync (29.8 kHz) [ 35.388] (II) RADEON(0): Modeline "720x480"x59.7 26.75 720 744 808 896 480 483 493 500 -hsync +vsync (29.9 kHz) [ 35.388] (II) RADEON(0): Modeline "640x480"x59.4 23.75 640 664 720 800 480 483 487 500 -hsync +vsync (29.7 kHz) [ 35.392] (II) RADEON(0): EDID for output HDMI-0 [ 35.392] (II) RADEON(0): Output VGA-0 disconnected [ 35.392] (II) RADEON(0): Output LVDS connected [ 35.392] (II) RADEON(0): Output HDMI-0 disconnected [ 35.392] (II) RADEON(0): Using exact sizes for initial modes [ 35.392] (II) RADEON(0): Output LVDS using initial mode 1366x768 [ 35.392] (II) RADEON(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. 
[ 35.392] (II) RADEON(0): mem size init: gart size :1fdff000 vram size: s:10000000 visible:fba0000 [ 35.392] (II) RADEON(0): EXA: Driver will allow EXA pixmaps in VRAM [ 35.392] (==) RADEON(0): DPI set to (96, 96) [ 35.392] (II) Loading sub module "fb" [ 35.392] (II) LoadModule: "fb" [ 35.392] (II) Loading /usr/lib/xorg/modules/libfb.so [ 35.492] (II) Module fb: vendor="X.Org Foundation" [ 35.492] compiled for 1.10.4, module version = 1.0.0 [ 35.492] ABI class: X.Org ANSI C Emulation, version 0.4 [ 35.492] (II) Loading sub module "ramdac" [ 35.492] (II) LoadModule: "ramdac" [ 35.492] (II) Module "ramdac" already built-in [ 35.492] (II) UnloadModule: "vesa" [ 35.492] (II) Unloading vesa [ 35.492] (II) UnloadModule: "fbdev" [ 35.492] (II) Unloading fbdev [ 35.492] (II) UnloadModule: "fbdevhw" [ 35.492] (II) Unloading fbdevhw [ 35.492] (--) Depth 24 pixmap format is 32 bpp [ 35.492] (II) RADEON(0): [DRI2] Setup complete [ 35.492] (II) RADEON(0): [DRI2] DRI driver: r600 [ 35.492] (II) RADEON(0): Front buffer size: 4224K [ 35.492] (II) RADEON(0): VRAM usage limit set to 228096K [ 35.615] (==) RADEON(0): Backing store disabled [ 35.615] (II) RADEON(0): Direct rendering enabled [ 35.658] (II) RADEON(0): Setting EXA maxPitchBytes [ 35.658] (II) EXA(0): Driver allocated offscreen pixmaps [ 35.658] (II) EXA(0): Driver registered support for the following operations: [ 35.658] (II) Solid [ 35.658] (II) Copy [ 35.658] (II) Composite (RENDER acceleration) [ 35.658] (II) UploadToScreen [ 35.658] (II) DownloadFromScreen [ 35.687] (II) RADEON(0): Acceleration enabled [ 35.687] (==) RADEON(0): DPMS enabled [ 35.687] (==) RADEON(0): Silken mouse enabled [ 35.721] (II) RADEON(0): Set up textured video [ 35.721] (II) RADEON(0): RandR 1.2 enabled, ignore the following RandR disabled message. 
[ 35.721] (--) RandR disabled [ 35.721] (II) Initializing built-in extension Generic Event Extension [ 35.721] (II) Initializing built-in extension SHAPE [ 35.721] (II) Initializing built-in extension MIT-SHM [ 35.721] (II) Initializing built-in extension XInputExtension [ 35.721] (II) Initializing built-in extension XTEST [ 35.721] (II) Initializing built-in extension BIG-REQUESTS [ 35.721] (II) Initializing built-in extension SYNC [ 35.721] (II) Initializing built-in extension XKEYBOARD [ 35.721] (II) Initializing built-in extension XC-MISC [ 35.721] (II) Initializing built-in extension SECURITY [ 35.721] (II) Initializing built-in extension XINERAMA [ 35.721] (II) Initializing built-in extension XFIXES [ 35.721] (II) Initializing built-in extension RENDER [ 35.721] (II) Initializing built-in extension RANDR [ 35.721] (II) Initializing built-in extension COMPOSITE [ 35.721] (II) Initializing built-in extension DAMAGE [ 35.721] (II) SELinux: Disabled on system [ 35.982] (II) AIGLX: enabled GLX_MESA_copy_sub_buffer [ 35.982] (II) AIGLX: enabled GLX_INTEL_swap_event [ 35.982] (II) AIGLX: enabled GLX_SGI_swap_control and GLX_MESA_swap_control [ 35.982] (II) AIGLX: enabled GLX_SGI_make_current_read [ 35.982] (II) AIGLX: GLX_EXT_texture_from_pixmap backed by buffer objects [ 35.982] (II) AIGLX: Loaded and initialized /usr/lib/dri/r600_dri.so [ 35.982] (II) GLX: Initialized DRI2 GL provider for screen 0 [ 35.999] (II) RADEON(0): Setting screen physical size to 361 x 203 [ 43.896] (II) RADEON(0): EDID vendor "LGD", prod id 684 [ 43.896] (II) RADEON(0): Printing DDC gathered Modelines: [ 43.896] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 43.924] (II) RADEON(0): EDID vendor "LGD", prod id 684 [ 43.924] (II) RADEON(0): Printing DDC gathered Modelines: [ 43.924] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 43.988] (II) RADEON(0): EDID vendor "LGD", prod id 684 [ 43.988] (II) RADEON(0): Printing DDC gathered Modelines: [ 43.988] (II) RADEON(0): Modeline "1366x768"x0.0 69.30 1366 1398 1430 1486 768 770 774 782 -hsync -vsync (46.6 kHz) [ 67.375] (II) config/udev: Adding input device Logitech USB Optical Mouse (/dev/input/event1) [ 67.376] (**) Logitech USB Optical Mouse: Applying InputClass "evdev pointer catchall" [ 67.376] (II) LoadModule: "evdev" [ 67.376] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 67.392] (II) Module evdev: vendor="X.Org Foundation" [ 67.392] compiled for 1.10.3, module version = 2.6.0 [ 67.392] Module class: X.Org XInput Driver [ 67.392] ABI class: X.Org XInput driver, version 12.2 [ 67.392] (II) Using input driver 'evdev' for 'Logitech USB Optical Mouse' [ 67.392] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 67.392] (**) Logitech USB Optical Mouse: always reports core events [ 67.392] (**) Logitech USB Optical Mouse: Device: "/dev/input/event1" [ 67.392] (--) Logitech USB Optical Mouse: Found 12 mouse buttons [ 67.392] (--) Logitech USB Optical Mouse: Found scroll wheel(s) [ 67.392] (--) Logitech USB Optical Mouse: Found relative axes [ 67.392] (--) Logitech USB Optical Mouse: Found x and y relative axes [ 67.392] (II) Logitech USB Optical Mouse: Configuring as mouse [ 67.392] (II) Logitech USB Optical Mouse: Adding scrollwheel support [ 67.392] (**) Logitech USB Optical Mouse: YAxisMapping: buttons 4 and 5 [ 67.392] (**) Logitech USB Optical Mouse: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 [ 
67.392] (**) Option "config_info" "udev:/sys/devices/pci0000:00/0000:00:13.0/usb5/5-1/5-1:1.0/input/input14/event1" [ 67.392] (II) XINPUT: Adding extended input device "Logitech USB Optical Mouse" (type: MOUSE) [ 67.392] (II) Logitech USB Optical Mouse: initialized for relative axes. [ 67.392] (**) Logitech USB Optical Mouse: (accel) keeping acceleration scheme 1 [ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration profile 0 [ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration factor: 2.000 [ 67.392] (**) Logitech USB Optical Mouse: (accel) acceleration threshold: 4 [ 67.392] (II) config/udev: Adding input device Logitech USB Optical Mouse (/dev/input/mouse0) [ 67.392] (II) No input driver/identifier specified (ignoring) [ 78.692] (II) Logitech USB Optical Mouse: Close [ 78.692] (II) UnloadModule: "evdev" [ 78.692] (II) Unloading evdev

    Read the article

  • Understanding Collabnet&rsquo;s LDAP binding

    - by Robert May
    We want to use both subversion usernames and passwords as well as Active Directory for our authentication on our Collabnet subversion server. This has proven to be more of a challenge than we thought, mostly because Collabnet’s documentation is pretty poor. To supplement that documentation, I add my own. The first thing to understand is that the attribute that you specify in the LDAP Login Attribute ONLY applies to lookups done for the user.  It does NOT apply to the LDAP Bind DN field.  Second, know that the debug logs (error is the one you want) don’t give you debug information for the bind DN, just the login attempts.  Third, by default, Active Directory does not allow anonymous binds, so you MUST put in a user that has the authority to query the Active Directory ldap. Because of these items, the values to set in those fields can be somewhat confusing.  You’ll want to have ADSI Edit handy (I also used ldp, which is installed by default on server 2008), since ADSI Edit can help you find stuff in your active directory.  Be careful, you can also break stuff. Here’s what should go into those fields. LDAP Security Level:  Should be set to None LDAP Server Host:  Should be set to the full name of a domain controller in your domain.  For example, dc.mydomain.com LDAP Server Port:  Should be set to 3268.  The default port of 389 will only query that specific server, not the global catalog.  By setting it to 3268, the global catalog will be queried, which is probably what you want. LDAP Base DN:  Should be set to the location where you want the search for users to begin.  By default, the search scope is set to sub, so all child organizational units below this setting will be searched.  In my case, I had created an OU specifically for users for group policies.  My value ended up being:  OU=MyOu,DC=domain,DC=org.   However, if you’re pointing it to the default Users folder, you may end up with something like CN=Users,DC=domain,DC=org (or com or whatever).  Again, use ADSI edit and use the Distinguished Name that it shows. LDAP Bind DN:  This needs to be the Distinguished Name of the user that you’re going to use for binding (i.e. the user you’ll be impersonating) for doing queries.  In my case, it ended up being CN=svn svn,OU=MyOu,DC=domain,DC=org.  Why the double svn, you might ask?  That’s because the first and last name fields are set to svn and by default, the distinguished name is the first and last name fields!  That’s important.  Its NOT the username or account name!  Again, use ADSI edit, browse to the username you want to use, right click and select properties, and then search the attributes for the Distinguished Name.  Once you’ve found that, select it and click View and you can copy and paste that into this field. LDAP Bind Password:  This is the password for the account in the Bind DN LDAP login Attribute: sAMAccountName.  If you leave this blank, uid is used, which may not even be set.  This tells it to use the Account Name field that’s defined under the account tab for users in Active Directory Users and Computers.  Note that this attribute DOES NOT APPLY to the LDAP Bind DN.  You must use the full distinguished name of the bind DN.  This attribute allows users to type their username and password for authentication, rather than typing their distinguished name, which they probably don’t know. LDAP Search Scope:  Probably should stay at sub, but could be different depending on your situation. LDAP Filter:  I left mine blank, but you could provide one to limit what you want to see.  
LDP would be helpful for determining what this is. LDAP Server Certificate Verification:  I left it checked, but didn’t try it without it being checked. Hopefully, this will save some others pain when trying to get Collabnet setup. Technorati Tags: Subversion,collabnet
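
    A quick way to sanity-check the values above before typing them into the Collabnet console is to reproduce the same bind and lookup in a few lines of C# with System.DirectoryServices. The sketch below is not part of the Collabnet setup itself; it simply binds with the same Bind DN, searches the same Base DN on the global-catalog port, and looks a user up by sAMAccountName. The host, OU, account name, and password strings are the article's placeholder values and must be replaced with your own.

    using System;
    using System.DirectoryServices; // add a reference to System.DirectoryServices.dll

    class LdapSettingsCheck
    {
        static void Main()
        {
            // The same values you would enter in the Collabnet LDAP fields.
            string host     = "dc.mydomain.com:3268";                   // LDAP Server Host + global catalog port
            string baseDn   = "OU=MyOu,DC=domain,DC=org";               // LDAP Base DN
            string bindDn   = "CN=svn svn,OU=MyOu,DC=domain,DC=org";    // LDAP Bind DN (a distinguished name, not an account name)
            string bindPass = "bind-password-here";                     // LDAP Bind Password

            using (var root = new DirectoryEntry("LDAP://" + host + "/" + baseDn, bindDn, bindPass))
            using (var searcher = new DirectorySearcher(root))
            {
                // LDAP Login Attribute: users will type their sAMAccountName, so search on it.
                searcher.Filter = "(sAMAccountName=jsmith)";
                searcher.SearchScope = SearchScope.Subtree;  // matches the default "sub" search scope

                SearchResult result = searcher.FindOne();
                Console.WriteLine(result != null
                    ? "Found: " + result.Properties["distinguishedName"][0]
                    : "No match - check the Base DN, Bind DN, or login attribute.");
            }
        }
    }

    If the bind itself fails, the Bind DN or password is wrong; if the bind succeeds but nothing is found, the Base DN or login attribute is the problem.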

    Read the article

  • Extracting the Date from a DateTime in Entity Framework 4 and LINQ

    - by Ken Cox [MVP]
    In my current ASP.NET 4 project, I’m displaying dates in a GridDateTimeColumn of Telerik’s ASP.NET Radgrid control. I don’t care about the time stuff, so my DataFormatString shows only the date bits: <telerik:GridDateTimeColumn FilterControlWidth="100px"   DataField="DateCreated" HeaderText="Created"    SortExpression="DateCreated" ReadOnly="True"    UniqueName="DateCreated" PickerType="DatePicker"    DataFormatString="{0:dd MMM yy}"> My problem was that I couldn’t get the built-in column filtering (it uses Telerik’s DatePicker control) to behave.  The DatePicker assumes that the time is 00:00:00 but the data would have times like 09:22:21. So, when you select a date and apply the EqualTo filter, you get no results. You would get results if all the time portions were 00:00:00. In essence, I wanted my Entity Framework query to give the DatePicker what it wanted… a Date without the Time portion. Fortunately, EF4 provides the TruncateTime  function. After you include Imports System.Data.Objects.EntityFunctions You’ll find that your EF queries will accept the TruncateTime function. Here’s my routine: Protected Sub RadGrid1_NeedDataSource _     (ByVal source As Object, _      ByVal e As Telerik.Web.UI.GridNeedDataSourceEventArgs) _     Handles RadGrid1.NeedDataSource     Dim ent As New OfficeBookDBEntities1     Dim TopBOMs = From t In ent.TopBom, i In ent.Items _                   Where t.BusActivityID = busActivityID _       And i.BusActivityID And t.ItemID = i.RecordID _       Order By t.DateUpdated Descending _       Select New With {.TopBomID = t.TopBomID, .ItemID = t.ItemID, _                        .PartNumber = i.PartNumber, _                        .Description = i.Description, .Notes = t.Notes, _                        .DateCreated = TruncateTime(t.DateCreated), _                        .DateUpdated = TruncateTime(t.DateUpdated)}     RadGrid1.DataSource = TopBOMs End Sub Now when I select March 14, 2011 on the DatePicker, the filter doesn’t stumble on time values that don’t make sense. Full Disclosure: Telerik gives me (and other developer MVPs) free copies of their suite.
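
    For anyone writing the same query in C# rather than VB, the equivalent call is shown below. This is a minimal sketch that reuses the entity names from the article (OfficeBookDBEntities1, TopBom, DateCreated, DateUpdated) but only projects the truncated dates; it is not the article's full RadGrid binding.

    using System;
    using System.Linq;
    using System.Data.Objects; // EF4: EntityFunctions lives here

    public static class TopBomQueries
    {
        // Returns creation dates with the time portion stripped, so they compare
        // cleanly against the date-only value produced by the DatePicker filter.
        public static IQueryable<DateTime?> CreatedDates(OfficeBookDBEntities1 ent)
        {
            return from t in ent.TopBom
                   orderby t.DateUpdated descending
                   // Translated by EF4 into a store-side date truncation, so rows
                   // created at e.g. 09:22:21 still match an EqualTo filter of 00:00:00.
                   select EntityFunctions.TruncateTime(t.DateCreated);
        }
    }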

    Read the article

  • SharePoint 2010 Hosting :: SharePoint 2010 Custom Web Template

    - by mbridge
    SharePoint 2010 offers some changes and additions to the SharePoint 2007 approach. Site definitions and publishing providers remain largely the same, but site templates created from the SharePoint UI or SharePoint Designer are now saved to a .WSP file, the same solution deployment packaging file format used for deploying custom SharePoint solutions. Site Templates saved to a .WSP solution file can be imported into Visual Studio for additional customization. Introducing the WebTemplate Feature Element The WebTemplate element, introduced in SharePoint 2010, allows site templates to be defined and deployed as a Feature as part of a solution package. A WebTemplate element feature can be used to deploy site templates in either a Farm or Sandbox solution - without modification. If deployed as a Farm feature and solution, site templates will appear in the site collection provisioning page in Central Administration and can be used to provision new site collections, or within a Site Collection to create sub-sites. If deployed as a Site feature and Sandbox solution, site templates will appear within the site collection to support creating a root site or sub-sites. Creating a new WebTemplate Feature in Visual Studio 2010 In addition to supporting the ability to save and import Site Templates created from the SharePoint UI into Visual Studio for customization, it can also be used to create new site templates from scratch. In the following sample we will walk through how to create a new WebTemplate solution based on  a customized version of the out-of-box Blank Site. 1. Create a new Empty SharePoint Project in Visual Studio 2010. 2. Add a new Empty Element to the project. we like to create folders for each type of element in our solution, so in our sample, we have created a Web Templates folder, and then added the BLANKENT element. NOTE: The Elements folder MUST share the same name as the WebTemplate name property. 3. Open the empty Elements.xml and add the <WebTemplate /> element block. 4. Copy the default.aspx and ONET.XML files from the STS site definition location at 14\TEMPLATES\Site Templates\STS. We will customize the ONET.XML in the next section. Open the properties for each file and set the Deployment Type to ElementFile. This ensures the files are deployed with the Element when included in a Feature. 5. By default a new feature is added to the solution for you automatically when a new element is added to the solution. Rename and edit the feature as appropriate. Select Farm for the scope to deploy the WebTemplate to the entire farm, or Site for a sandboxed solution. Customize the ONET.XML At this point, you have a working WebTemplate solution that will deploy the identical site to the out-of-box Blank Site, however the ONET.XML supporting the STS site definition contains 3 configurations – essentially 3 separate site templates and can be simplified before customizing. In the following sample, we have trimmed the ONET.XML to the essentials for a single Site Template, and added references to the <SiteFeatures /> and <WebFeatures /> elements to include the SharePoint Standard and Enterprise features. We have left the top-level navigation bar, and the default page module intact, but removed all other extraneous markup.

    Read the article

  • OSI Model

    - by kaleidoscope
The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provide services to the layer above it and receive services from the layer below it.
Description of the OSI layers:
Layer 1: Physical Layer
· Defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium.
· Establishment and termination of a connection to a communications medium.
· Participation in the process whereby the communication resources are effectively shared among multiple users.
· Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.
Layer 2: Data Link Layer
· Provides the functional and procedural means to transfer data between network entities.
· Detects and possibly corrects errors that may occur in the Physical Layer. The error check is performed using the Frame Check Sequence (FCS).
· The address is then examined to decide whether the host needs to process the rest of the frame itself or pass it on to another host.
· The layer is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
· The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it.
· The LLC sublayer controls frame synchronization, flow control and error checking.
Layer 3: Network Layer
· Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
· Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.
· Routers operate at this layer—sending data throughout the extended network and making the Internet possible.
Layer 4: Transport Layer
· Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.
· Controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control.
· The Transport Layer can keep track of the segments and retransmit those that fail.
Layer 5: Session Layer
· Controls the dialogues (connections) between computers.
· Establishes, manages and terminates the connections between the local and remote application.
· Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
· Implemented explicitly in application environments that use remote procedure calls.
Layer 6: Presentation Layer
· Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack.
· Provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.
Layer 7: Application Layer
· This layer interacts with software applications that implement a communicating component.
· Identifies communication partners, determines resource availability, and synchronizes communication.
   o When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
   o When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.
   o In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.
Technorati Tags: Kunal,OSI,Networking
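
    To make the layering concrete, the short C# sketch below sends a minimal HTTP request over a raw TCP connection: the request text is the application-layer payload (Layer 7), TcpClient hands it to the transport layer (TCP, Layer 4), and the network, data link, and physical layers (3, 2, 1) are handled entirely by the operating system and the network hardware. The host name is just an example.

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Text;

    class LayeredRequest
    {
        static void Main()
        {
            // Layer 7 (Application): the HTTP request we want to send.
            string request = "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";

            // Layer 4 (Transport): TcpClient provides a reliable, ordered byte stream over TCP.
            using (var client = new TcpClient("example.com", 80))
            using (NetworkStream stream = client.GetStream())
            {
                // Layers 3-1 (Network, Data Link, Physical) are invisible from here:
                // the OS picks routes, builds frames, and drives the NIC for us.
                byte[] payload = Encoding.ASCII.GetBytes(request);
                stream.Write(payload, 0, payload.Length);

                using (var reader = new StreamReader(stream, Encoding.ASCII))
                {
                    Console.WriteLine(reader.ReadLine()); // status line, e.g. "HTTP/1.1 200 OK"
                }
            }
        }
    }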

    Read the article

  • Creando un File Upload

    - by jaullo
    To get started, let's talk a little about the FileUpload control so we have a general idea of what it is and how it works. FileUpload is an ASP.NET control that lets users select a file from any location on their machine and upload it to a predetermined directory through an ASP.NET page. By default the control will not accept files larger than 4 MB; however, that limit can be changed from the application's web.config, either to raise it or to lower it. Our example focuses on building a web control that lets the user select a file and save it, so let's begin. The first step is to add to our project a web user control that we will call Upload.ascx. Next, in our web control, we add the following markup:

<table style="width: 100%">
    <tr>
        <td colspan="3">
        <div align="center">
            <asp:Label ID="Label1" runat="server" Text="File Upload"></asp:Label>
        </div>
        </td>
    </tr>
    <tr>
        <td style="width: 456px" rowspan="2">&nbsp;</td>
        <td style="width: 386px">
            <div align="center">
                <asp:FileUpload ID="FileUpload1" runat="server" Height="24px" Width="243px" />
                <span id="Span1" runat="server" />
            </div>
        </td>
        <td rowspan="2"></td>
    </tr>
    <tr>
        <td style="width: 386px">
            <div align="center">
                <asp:ImageButton Id="btnupload" runat="server" OnClick="btnupload_Click"
                    ImageUrl="~/Styles/img/upload.png" style="text-align: center" />
            </div>
        </td>
    </tr>
    <tr>
        <td colspan="3">&nbsp;</td>
    </tr>
</table>

With that in place, our control should look something like this. Finally, in the control's code-behind we add the code for our button, which is responsible for reading the file held by the FileUpload control and saving it to the specified path.

Protected Sub btnupload_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles btnupload.Click
    If FileUpload1.HasFile Then
        Dim fileExt As String
        fileExt = System.IO.Path.GetExtension(FileUpload1.FileName)
        If (fileExt = ".exe") Then
            Label1.Text = "You can't upload .exe files!"
        Else
            Try
                FileUpload1.SaveAs(decrpath & _
                   FileUpload1.FileName)
                Label1.Text = "File name: " & _
                  FileUpload1.PostedFile.FileName & "<br>" & _
                  "File Size: " & _
                  FileUpload1.PostedFile.ContentLength & " bytes<br>" & _
                  "Content type: " & _
                  FileUpload1.PostedFile.ContentType
            Catch ex As Exception
                Label1.Text = "ERROR: " & ex.Message.ToString()
            End Try
        End If
    Else
        Label1.Text = "You have not specified a file!"
    End If
End Sub

As you can see, the code above also adds a few extra touches: it reports the file name, the content type, and the size in bytes once the file has been uploaded to the server. Finally, keep in mind that decrpath is the path the file will be saved to; change it to whatever suits your application.
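The 4 MB ceiling mentioned above comes from ASP.NET's maxRequestLength setting, which is expressed in kilobytes. A minimal sketch of the web.config change, assuming you wanted to allow uploads of up to roughly 20 MB (the numbers here are illustrative):

<configuration>
  <system.web>
    <!-- maxRequestLength is in KB: 20480 KB is about 20 MB -->
    <!-- executionTimeout (in seconds) gives larger uploads more time to finish -->
    <httpRuntime maxRequestLength="20480" executionTimeout="360" />
  </system.web>
</configuration>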

    Read the article

  • Using the RSSBus Salesforce Excel Add-In From Excel Macros (VBA)

    - by dataintegration
    The RSSBus Salesforce Excel Add-In makes it easy to retrieve and update data from Salesforce from within Microsoft Excel. In addition to the built-in wizards that make data manipulation possible without code, the full functionality of the RSSBus Excel Add-Ins is available programmatically with Excel Macros (VBA) and Excel Functions. This article shows how to write an Excel macro that can be used to perform bulk inserts into Salesforce. Although this article uses the Salesforce Excel Add-In as an example, the same process can be applied to any of the Excel Add-Ins available on our website.
Step 1: Download and install the RSSBus Excel Add-In available on our website.
Step 2: Open Excel and create placeholder cells for the connection details that are needed from the macro. In this article, a spreadsheet will be created for batch inserts, and these cells will store the connection details and will be used to report the job Id, the batch Id, and the batch status.
Step 3: Switch to the Developer tab in Excel. Add a new button on the spreadsheet, and create a new macro associated with it. This macro will contain the code needed to insert a batch of rows into Salesforce.
Step 4: Add a reference to the Excel Add-In by selecting Tools --> References --> RSSBus Excel Add-In. The macro functions of the Excel Add-In will be available once the reference has been added. The following code shows how to call a Stored Procedure. In this example, a job is created to insert Leads by calling the CreateJob stored procedure. CreateJob returns a jobId that can be used to upload a large number of Leads in one transaction. Note the use of cells B1, B2, B3, and B4 that were created in Step 2 to read the connection settings from the Excel spreadsheet and to write out the status of the procedure.

methodName = "CreateJob"
module.SetProviderName ("Salesforce")
nameArray = Array("ObjectName", "Action", "ConcurrencyMode")
valueArray = Array("Lead", "insert", "Serial")
user = Range("B1").value
pass = Range("B2").value
atoken = Range("B3").value
If (Not user = "" And Not pass = "" And Not atoken = "") Then
    module.SetConnectionString ("User=" + user + ";Password=" + pass + ";Access Token=" + atoken + ";")
    If module.CallSP(methodName, nameArray, valueArray) Then
        Dim ColumnCount As Integer
        ColumnCount = module.GetColumnCount
        Dim idIndex As Integer
        For Count = 0 To ColumnCount - 1
            Dim colName As String
            colName = module.GetColumnName(Count)
            If module.GetColumnName(Count) = "id" Then
                idIndex = Count
            End If
        Next
        While (Not module.EOF)
            Range("B4").value = module.GetValue(idIndex)
            module.MoveNext
        Wend
    Else
        MsgBox "The CreateJob query failed."
    End If
    Exit Sub
Else
    MsgBox "Please specify the connection details."
    Exit Sub
End If
Error:
    MsgBox "ERROR: " & Err.Description

Step 5: Add the code to your macro. If you use the code above, you can check the results at Salesforce.com. They can be seen at Administration Setup -> Monitoring -> Bulk Data Load Jobs. Download the attached sample file for a more complete demo.
Distributing an Excel File With Macros: An Excel file with macros is saved using the .xlsm extension. The code for the macro remains in the Excel file, and you can distribute your Excel file to any machine where the RSSBus Salesforce Excel Add-In is already installed.
Macro Sample File: Please download the fully functional sample Excel file that includes the code referenced here. You will also need the RSSBus Excel Add-In to make the connection. You can download a free trial here.
Note: You may get an error message stating: "Can't find project or library." in Excel 2007, since this example is made using Excel 2010. To resolve this, navigate to Tools -> References and uncheck the "MISSING: RSSBus Excel Add-In", then scroll down and check the "RSSBus Excel Add-In" listed below it.
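The code in Step 4 ends with Exit Sub and an Error: label, but the enclosing procedure and the On Error statement that routes failures to that label are not shown. A minimal sketch of how the macro behind the button might be wrapped (the Sub name is hypothetical; the body is the Step 4 code):

Sub CreateLeadJob()
    On Error GoTo Error          ' jump to the Error: label on any runtime failure
    ' ... the Step 4 code goes here: SetProviderName, SetConnectionString,
    '     CallSP("CreateJob", ...) and the loop that reads the returned jobId ...
    Exit Sub                     ' skip the handler when everything succeeds
Error:
    MsgBox "ERROR: " & Err.Description
End Sub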

    Read the article

  • Copy New Files Only in .NET

    - by psheriff
    Recently I had a client that had a need to copy files from one folder to another. However, there was a process running that would dump new files into the original folder every minute or so. So, we needed to be able to copy over all the files one time, then also be able to go back a little later and grab just the new files. After looking into the System.IO namespace, none of the classes in there met my needs exactly. Of course I could build it out of the various File and Directory classes, but then I remembered back to my old DOS days (yes, I am that old!). The XCopy command in DOS (or the command prompt for you pure Windows people) is very powerful. One of the options you can pass to this command is to grab only newer files when copying from one folder to another. So instead of writing a ton of code I decided to simply call the XCopy command using the Process class in .NET. The command I needed to run at the command prompt looked like this:

XCopy C:\Original\*.* D:\Backup\*.* /q /d /y

What this command does is copy all files from the Original folder on the C drive to the Backup folder on the D drive. The /q option says to do it quietly without repeating all the file names as it copies them. The /d option says to get any newer files it finds in the Original folder that are not in the Backup folder, or any files that have a newer date/time stamp. The /y option will automatically overwrite any existing files without prompting the user to press the "Y" key to overwrite the file. To translate this into code that we can call from our .NET programs, you can write the CopyFiles method presented below.

C#

using System.Diagnostics;

public void CopyFiles(string source, string destination)
{
  ProcessStartInfo si = new ProcessStartInfo();
  string args = @"{0}\*.* {1}\*.* /q /d /y";

  args = string.Format(args, source, destination);

  si.FileName = "xcopy";
  si.Arguments = args;
  Process.Start(si);
}

VB.NET

Imports System.Diagnostics

Public Sub CopyFiles(source As String, destination As String)
  Dim si As New ProcessStartInfo()
  Dim args As String = "{0}\*.* {1}\*.* /q /d /y"

  args = String.Format(args, source, destination)

  si.FileName = "xcopy"
  si.Arguments = args
  Process.Start(si)
End Sub

The CopyFiles method first creates a ProcessStartInfo object. This object is where you fill in the name of the command you wish to run and also the arguments that you wish to pass to the command. I created a string with the arguments, then filled in the source and destination folders using the string.Format() method. Finally you call the Start method of the Process class, passing in the ProcessStartInfo object. That's all there is to calling any command in the operating system. Very simple, and much less code than it would have taken had I coded it using the various File and Directory classes. Good Luck with your Coding, Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS **
Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.
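One thing the snippet above does not do is wait for xcopy to finish or check whether it succeeded. A minimal sketch of how that might look, assuming the same arguments (xcopy signals failure with a non-zero exit code):

using System;
using System.Diagnostics;

public void CopyFilesAndWait(string source, string destination)
{
  ProcessStartInfo si = new ProcessStartInfo();
  si.FileName = "xcopy";
  si.Arguments = string.Format(@"{0}\*.* {1}\*.* /q /d /y", source, destination);
  si.UseShellExecute = false;   // start the process directly so we can track it
  si.CreateNoWindow = true;     // keep a console window from flashing up

  using (Process p = Process.Start(si))
  {
    p.WaitForExit();            // block until xcopy has finished copying
    if (p.ExitCode != 0)        // xcopy returns 0 when the copy succeeded
      throw new InvalidOperationException("xcopy failed with exit code " + p.ExitCode);
  }
}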

    Read the article

  • Session and Pop Up Window

    - by imran_ku07
     Introduction: Session is secure state management. It lets a user store information on one page and access it on another, and it can hold any type of object. Every user's session is identified by a cookie, which the client presents to the server. Unfortunately, when you open a new pop-up window, this cookie may not be posted to the server with the request, and the server is then unable to identify the session data for the current user. In this article I will show you how to handle this situation.
Description: While working on an application, I was getting an exception saying that Session was null whenever a pop-up window opened. Looking at the problem more closely, I found that the parent page's ASP.NET_SessionId cookie was not being posted in the cookie header of the child (popup) window. Therefore, to make the same session available in both the parent and the child (popup) window, you have to present the same cookie. To share the cookie, I passed the parent SessionID in the query string:

"window.open('http://abc.com/s.aspx?SASID=" & Session.SessionID & "','V');"

and in the Application_PostMapRequestHandler application event I check whether the current request has no ASP.NET_SessionId cookie and the SASID query string is present; if so, I add the cookie to the Request before Session is acquired, so that the session data stays the same for both the parent and the popup window.

Private Sub Application_PostMapRequestHandler(ByVal sender As Object, ByVal e As EventArgs)
    If (Request.Cookies("ASP.NET_SessionId") Is Nothing) AndAlso (Request.QueryString("SASID") IsNot Nothing) Then
        Request.Cookies.Add(New HttpCookie("ASP.NET_SessionId", Request.QueryString("SASID")))
    End If
End Sub

Now you can access Session in both the parent and the child window without any problem.
How this works: ASP.NET (both Web Forms and MVC) uses a cookie (ASP.NET_SessionId) to identify the user making the request. Cookies may be persistent (saved permanently in the user's cookie store) or non-persistent (held temporarily in browser memory). The ASP.NET_SessionId cookie is saved as non-persistent, which means that if the user closes the browser the cookie is immediately removed. This is a sensible step that helps security, but it is also why ASP.NET may be unable to tell that the request is coming from the same user: a separate browser instance gets its own ASP.NET_SessionId. To resolve this you need to present the parent's ASP.NET_SessionId cookie to the server when the popup window opens. You can confirm this situation by using tools like Firebug or Fiddler.
Summary: Hopefully you enjoyed this article and can see how to work around the problem of sharing Session between different browser instances by sharing their session identifier cookie.
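The article shows the window.open string but not how the parent page emits it. A minimal sketch, assuming a server-side button (btnOpenPopup and the URL are placeholders) that registers the script with the current SessionID appended:

Protected Sub btnOpenPopup_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnOpenPopup.Click
    ' Build the popup URL with the parent session's identifier in the query string
    Dim script As String = "window.open('http://abc.com/s.aspx?SASID=" & Session.SessionID & "','V');"
    ' Register the script so it runs when the response reaches the browser
    ClientScript.RegisterStartupScript(Me.GetType(), "openPopup", script, True)
End Sub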

    Read the article

  • perl comparing 2 data file as array 2D for finding match one to one [migrated]

    - by roman serpa
    I'm doing a program that uses combinations of variables ( combiData.txt 63 rows x different number of columns) for analysing a data table ( j1j2_1.csv, 1000filas x 19 columns ) , to choose how many times each combination is repeated in data table and which rows come from (for instance, tableData[row][4]). I have tried to compile it , however I get the following message : Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34. Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56. Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56. Use of uninitialized value in subtraction (-) at rowInData.pl line 56. Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56. nothing This is my code: #!/usr/bin/perl use strict; use warnings; my $line_match; my $countTrue; open (FILE1, "<combiData.txt") or die "can't open file text1.txt\n"; my @tableCombi; while(<FILE1>) { my @row = split(' ', $_); push(@tableCombi, \@row); } close FILE1 || die $!; open (FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n"; my @tableData; while(<FILE2>) { my @row2 = split(/\s*,\s*/, $_); push(@tableData, \@row2); } close FILE2 || die $!; #function transform combiData.txt variable (position ) to the real value that i have to find in the data table. sub trueVal($){ my ($val) = $_[0]; if($val == 7){ return ('nonsynonymous_SNV'); } elsif( $val == 14) { return '1'; } elsif( $val == 15) { return '1';} elsif( $val == 16) { return '1'; } elsif( $val == 17) { return '1'; } elsif( $val == 18) { return '1';} elsif( $val == 19) { return '1';} else { print 'nothing'; } } #function IntToStr ( ) , i'm not sure if it is necessary) that transforms $ to strings , to use the function <eq> in the third loop for the array of combinations compared with the data array . sub IntToStr { return "$_[0]"; } for my $combi (@tableCombi) { $line_match = 0; for my $sheetData (@tableData) { $countTrue=0; for my $cell ( @$combi) { #my $temp =\$tableCombi[$combi][$cell] ; #if ( trueVal($tableCombi[$combi][$cell] ) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ){ #if ( IntToStr(trueVal($$temp )) eq IntToStr( $tableData[$sheetData][ $$temp-1] ) ){ if ( IntToStr(trueVal($tableCombi[$combi][$cell]) ) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] -1]) ){ $countTrue++;} if ($countTrue==@$combi){ $line_match++; #if ($line_match < 50){ print $tableData[$sheetData][4]." "; #} } } } print $line_match." \n"; }
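The warnings quoted above come from using the loop variables, which are array references ($combi, $sheetData) or the combination's own values ($cell), as numeric indices back into @tableCombi and @tableData. A minimal sketch of how the counting loops might be written by dereferencing the rows directly instead (an illustrative rewrite under that assumption, not the poster's final code):

# For each combination (a list of column numbers), count the data rows where
# every listed column holds the value trueVal() maps that column number to.
for my $combi (@tableCombi) {          # $combi is an array ref of column numbers
    my $line_match = 0;
    for my $row (@tableData) {         # $row is an array ref holding one CSV line
        my $count_true = 0;
        for my $col (@$combi) {        # $col is the column number itself
            $count_true++ if trueVal($col) eq $row->[$col - 1];
        }
        if ($count_true == @$combi) {  # every column in the combination matched
            $line_match++;
            print $row->[4], " ";      # report which row the match came from
        }
    }
    print "$line_match \n";
}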

    Read the article

  • Construction Paper, Legos, and Architectural Modeling

    I can remember as a kid playing with construction paper and Legos to explore my imagination. Through that exploration I was able to build airplanes, footballs, guns, and more out of paper. Additionally, I could create entire cities, robots, or anything else I could imagine out of Legos. These toys, I now realize, were in fact tools that gave me an opportunity to explore my ideas in the physical world through the use of modeling. My imagination was allowed to run wild as I, unknowingly at the time, made design decisions that directly affected the models I was building from the raw materials. To prove my point further, I can remember building a paper airplane that seemed to go nowhere when I tried to throw it. So I decided to attach a paper clip to the plane before throwing it the next time, to test my idea that adding more weight would make it fly better and for longer distances. The paper airplane allowed me to model a design decision by creating an artifact: a plane carrying extra weight because a paper clip had been incorporated into the design. Also, I remember using Legos to build all sorts of creations, and these creations became artifacts of my imagination. As I further and further refined my Lego creations through the process of playing, I was able to create elaborate artifacts of my imagination. These artifacts represented design decisions I had made in the evolution of my creation through my child-like design process. In some form or fashion the artifacts I created as a kid are very similar to the artifacts that I create when I model a software architectural concept or a software design, in that the process of making decisions is directly translated into a tangible model in the form of an architectural model. Architectural models have been defined as artifacts that depict design decisions of a system's architecture. The act of creating architectural models is the act of architectural modeling. Furthermore, architectural modeling is the process of creating a physical model based on architectural concepts and documenting these design decisions. In the process of creating models, the standard notation used is architectural modeling notation. This notation is the primary method of capturing the essence of design decisions regarding architecture. Modeling notations can vary based on the need and intent of a project; typically they range from natural language to a diagram-based notation. Currently, the Unified Modeling Language (UML) is the industry standard architectural modeling notation because it allows architectures to be defined through a series of boxes, lines, arrows and other basic symbols that encapsulate design decisions into virtual components, connectors, configurations and interfaces. Furthermore, UML allows models to be broken down further through the use of natural language, explaining each section of the model in plain English. One of the major factors in architectural modeling is to define what is to be modeled. As a basic rule of thumb, I tend to model architecture based on the complexity of the system or its sub-systems. Another key factor is the level of detail that is actually needed for a model. For example, if I am modeling a system for a CEO to view, then the low-level details will be omitted.
In comparison, if I was modeling a system for another engineer to actually implement I would include as much detailed information as I could to help the engineer implement my design.

    Read the article

  • Observing flow control idle time in TCP

    - by user12820842
    Previously I described how to observe congestion control strategies during transmission, and here I talked about TCP's sliding window approach for handling flow control on the receive side. A neat trick would now be to put the pieces together and ask the following question - how often is TCP transmission blocked by congestion control (send-side flow control) versus a zero-sized send window (which is the receiver saying it cannot process any more data)? So in effect we are asking whether the size of the receive window of the peer or the congestion control strategy may be sub-optimal. The result of such a problem would be that we have TCP data that we could be transmitting but we are not, potentially affecting throughput. So flow control is in effect: when the congestion window is less than or equal to the amount of bytes outstanding on the connection, which we can derive from args[3]->tcps_snxt - args[3]->tcps_suna, i.e. the difference between the next sequence number to send and the lowest unacknowledged sequence number; and when the window in the TCP segment received is advertised as 0. We time from these events until we send new data (i.e. args[4]->tcp_seq >= the snxt value recorded when the window closed). Here's the script:

#!/usr/sbin/dtrace -s

#pragma D option quiet

tcp:::send
/ (args[3]->tcps_snxt - args[3]->tcps_suna) >= args[3]->tcps_cwnd /
{
        cwndclosed[args[1]->cs_cid] = timestamp;
        cwndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
        @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ cwndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= cwndsnxt[args[1]->cs_cid] /
{
        @meantimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = avg(timestamp - cwndclosed[args[1]->cs_cid]);
        @stddevtimeclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = stddev(timestamp - cwndclosed[args[1]->cs_cid]);
        @numclosed["cwnd", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        cwndclosed[args[1]->cs_cid] = 0;
        cwndsnxt[args[1]->cs_cid] = 0;
}

tcp:::receive
/ args[4]->tcp_window == 0 && (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
{
        swndclosed[args[1]->cs_cid] = timestamp;
        swndsnxt[args[1]->cs_cid] = args[3]->tcps_snxt;
        @numclosed["swnd", args[2]->ip_saddr, args[4]->tcp_dport] = count();
}

tcp:::send
/ swndclosed[args[1]->cs_cid] && args[4]->tcp_seq >= swndsnxt[args[1]->cs_cid] /
{
        @meantimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = avg(timestamp - swndclosed[args[1]->cs_cid]);
        @stddevtimeclosed["swnd", args[2]->ip_daddr, args[4]->tcp_sport] = stddev(timestamp - swndclosed[args[1]->cs_cid]);
        swndclosed[args[1]->cs_cid] = 0;
        swndsnxt[args[1]->cs_cid] = 0;
}

END
{
        printf("%-6s %-20s %-8s %-25s %-8s %-8s\n", "Window", "Remote host", "Port", "TCP Avg WndClosed(ns)", "StdDev", "Num");
        printa("%-6s %-20s %-8d %@-25d %@-8d %@-8d\n", @meantimeclosed, @stddevtimeclosed, @numclosed);
}

So this script will show us whether the peer's receive window size is preventing flow ("swnd" events) or whether congestion control is limiting flow ("cwnd" events). As an example I traced on a server with a large file transfer in progress via a webserver and with an active ssh connection running "find / -depth -print". Here is the output:

^C
Window Remote host          Port     TCP Avg WndClosed(ns)     StdDev    Num
cwnd   10.175.96.92         80       86064329                  77311705  125
cwnd   10.175.96.92         22       122068522                 151039669 81

So we see in this case, the congestion window closes 125 times for port 80 connections and 81 times for ssh. The average time the window is closed is 0.086sec for port 80 and 0.12sec for port 22.
So if you wish to change the congestion control algorithm in Oracle Solaris 11, a useful first step may be to see whether congestion really is an issue on your network. Scripts like the one posted above can help assess this, but it's worth reiterating that if congestion control is occurring, that's not necessarily a problem that needs fixing. Recall that congestion control is about controlling flow to prevent large-scale drops, so looking at congestion events in isolation doesn't tell us the whole story. For example, are we seeing more congestion events with one control algorithm, but more drops/retransmissions with another? As always, it's best to start with measures of throughput and latency before arriving at a specific hypothesis such as "my congestion control algorithm is sub-optimal".
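For reference, a minimal sketch of how the congestion control algorithm can be inspected and switched on Oracle Solaris 11. This assumes the cong_enabled and cong_default TCP protocol properties exposed through ipadm; check the ipadm man page on your release before relying on the exact property names:

# List the congestion control algorithms the system makes available
ipadm show-prop -p cong_enabled tcp

# Show which algorithm new TCP connections use by default
ipadm show-prop -p cong_default tcp

# Switch the default algorithm (for example, to CUBIC)
ipadm set-prop -p cong_default=cubic tcp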

    Read the article

  • Access Master Page Controls II

    - by Bunch
    Here is another way to access master page controls. This way has a bit less coding then my previous post on the subject. The scenario would be that you have a master page with a few navigation buttons at the top for users to navigate the app. After a button is clicked the corresponding aspx page would load in the ContentPlaceHolder. To make it easier for the users to see what page they are on I wanted the clicked navigation button to change color. This would be a quick visual for the user and is useful when inevitably they are interrupted with something else and cannot get back to what they were doing for a little while. Anyway the code is something like this. Master page: <body>     <form id="form1" runat="server">     <div id="header">     <asp:Panel ID="Panel1" runat="server" CssClass="panelHeader" Width="100%">        <center>            <label style="font-size: large; color: White;">Test Application</label>        </center>       <asp:Button ID="btnPage1" runat="server" Text="Page1" PostBackUrl="~/Page1.aspx" CssClass="navButton"/>       <asp:Button ID="btnPage2" runat="server" Text="Page2" PostBackUrl="~/Page2.aspx" CssClass="navButton"/>       <br />     </asp:Panel>     <br />     </div>     <div>         <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>         <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server">         </asp:ContentPlaceHolder>     </div>     </form> </body> Page 1: VB Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load     Dim clickedButton As Button = Master.FindControl("btnPage1")     clickedButton.CssClass = "navButtonClicked" End Sub CSharp protected void Page_Load(object sender, EventArgs e) {     Button clickedButton;     clickedButton = (Button)Master.FindControl("btnPage1");     clickedButton.CssClass = "navButtonClicked"; } CSS: .navButton {     background-color: White;     border: 1px #4e667d solid;     color: #2275a7;     display: inline;     line-height: 1.35em;     text-decoration: none;     white-space: nowrap;     width: 100px;     text-align: center;     margin-bottom: 10px;     margin-left: 5px;     height: 30px; } .navButtonClicked {     background-color:#FFFF86;     border: 1px #4e667d solid;     color: #2275a7;     display: inline;     line-height: 1.35em;     text-decoration: none;     white-space: nowrap;     width: 100px;     text-align: center;     margin-bottom: 10px;     margin-left: 5px;     height: 30px; } The idea is pretty simple, use FindControl for the master page in the page load of your aspx page. In the example I changed the CssClass for the aspx page's corresponding button to navButtonClicked which has a different background-color and makes the clicked button stand out. Technorati Tags: ASP.Net,CSS,CSharp,VB.Net
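One thing worth guarding against in the Page_Load code above: Master.FindControl returns Nothing if the control isn't found (for example, after a button is renamed on the master page), which turns the CssClass assignment into a NullReferenceException. A minimal defensive variant of the same idea in VB:

Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    ' TryCast yields Nothing instead of throwing when the control is missing or not a Button
    Dim clickedButton As Button = TryCast(Master.FindControl("btnPage1"), Button)
    If clickedButton IsNot Nothing Then
        clickedButton.CssClass = "navButtonClicked"
    End If
End Sub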

    Read the article

  • Investigating Strategies For Functional Decomposition

    - by Liam McLennan
    Introducing Functional Decomposition
Before I begin I must apologise. I think I am using the term 'functional decomposition' loosely, and probably incorrectly. For the purpose of this article I use functional decomposition to mean the recursive splitting of a large problem into increasingly smaller ones, so that the one large problem may be solved by solving a set of smaller problems. The justification for functional decomposition is that the decomposed problem is more easily solved. As software developers we recognise that the smaller pieces are more easily tested, since they do less and are more cohesive. Functional decomposition is important to all scientific pursuits. Once we understand natural selection we can start to look for humanity's ancestral species; once we understand the big bang we can trace our expanding universe back to its origin. Isaac Newton acknowledged the compositional nature of his scientific achievements: If I have seen further than others, it is by standing upon the shoulders of giants

The Two Strategies For Functional Decomposition of Computer Programs
Private Methods
When I was working on my undergraduate degree I was taught to functionally decompose problems by using private methods. Consider the problem of painting a house. The obvious solution is to solve the problem as a single unit:

public void PaintAHouse()
{
    // all the things required to paint a house ...
}

We decompose the problem by breaking it into parts:

public void PaintAHouse()
{
    PaintUndercoat();
    PaintTopcoat();
}

private void PaintUndercoat()
{
    // everything required to paint the undercoat
}

private void PaintTopcoat()
{
    // everything required to paint the topcoat
}

The problem can be recursively decomposed until a sufficiently granular level of detail is reached:

public void PaintAHouse()
{
    PaintUndercoat();
    PaintTopcoat();
}

private void PaintUndercoat()
{
    prepareSurface();
    fetchUndercoat();
    paintUndercoat();
}

private void PaintTopcoat()
{
    fetchPaint();
    paintTopcoat();
}

According to Wikipedia, at least one computer programmer has referred to this process as "the art of subroutining". The practical issues that I have encountered when using private methods for decomposition are: to preserve the top level API all of the steps must be private, which means that they can't easily be tested; and the private methods often have little cohesion except that they form part of the same solution.

Decomposing to Classes
The alternative is to decompose large problems into multiple classes, effectively using a class instead of each private method. The API delegates to related classes, so the API is not polluted by the sub-steps of the problem, and the steps can be easily tested because they are each in their own highly cohesive class. Additionally, I think that this technique facilitates better adherence to the Single Responsibility Principle, since each class can be decomposed until it has precisely one responsibility. Revisiting my previous example using class composition:

public class HousePainter
{
    private UndercoatPainter undercoatPainter = new UndercoatPainter();
    private TopcoatPainter topcoatPainter = new TopcoatPainter();

    public void PaintAHouse()
    {
        undercoatPainter.Paint();
        topcoatPainter.Paint();
    }
}

Summary
When decomposing a problem there is more than one way to represent the sub-problems.
Using private methods keeps the logic in one place and prevents a proliferation of classes (thereby following the four rules of simple design) but the class decomposition is more easily testable and more compatible with the Single Responsibility Principle.
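To make the testability point concrete, here is a minimal sketch (the interface and constructor injection are illustrative additions, not from the original post) of how decomposing to classes lets each step be swapped for a fake in a test, which the private-method version cannot do without exposing its internals:

public interface IPainter { void Paint(); }

public class UndercoatPainter : IPainter
{
    public void Paint() { /* everything required to paint the undercoat */ }
}

public class TopcoatPainter : IPainter
{
    public void Paint() { /* everything required to paint the topcoat */ }
}

public class HousePainter
{
    private readonly IPainter undercoatPainter;
    private readonly IPainter topcoatPainter;

    // Collaborators are supplied from outside, so a test can pass in fakes or mocks
    public HousePainter(IPainter undercoatPainter, IPainter topcoatPainter)
    {
        this.undercoatPainter = undercoatPainter;
        this.topcoatPainter = topcoatPainter;
    }

    public void PaintAHouse()
    {
        undercoatPainter.Paint();
        topcoatPainter.Paint();
    }
}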

    Read the article

  • Why I lose certain values from array?

    - by blankon91
    I've the code to input the values dynamically, when I use it to add the values for the first time, it's fine, but when I want to edit it, the old values didn't inserted on sql query but the new values inserted Here is the example: http://i.stack.imgur.com/dwugS.jpg here is the code: ============the function================= sub ShowItemfgEdit(query,selItemName,defValue,num,cdisable) response.write "<select " & cdisable & " num=""" & num & """ id=""itemCombo"" name=""" & selItemName & """ class=""label"" onchange=""varUsage.ChangeSatuanDt(this)"">" if NOT query.BOF then query.moveFirst WHILE NOT query.EOF tulis = "" if trim(defValue) = trim(query("ckdbarang")) then tulis = "selected" end if response.write "<option value=""" & trim(query("ckdbarang")) & """" & tulis & ">" & trim(query("ckdbarang")) & " - " & trim(query("vnamabarang")) query.moveNext WEND end if response.write "</select>" end sub ============calling the function================ <td class="rb" align="left"><% call ShowItemfgEdit(qGetItemfgGrp,"fitem",qGetUsageDt("ckdfg"),countLine,readonlyfg) %></td> ==============post the value====================== <input type="hidden" name="fitem" value=""> ================get the value=================== for i = 1 to request.form("hdnOrderNum") if request.form("selOrdItem_" & i) <> "" then 'bla...blaa...blaa... ckdfg = trim(request.form("fitem_" & i)) '<==here is the problem objCommand.commandText = "INSERT INTO IcTrPakaiDt " &_ "(id, id_h, ckdunitkey, cnopakai, dtglpakai, ckdbarang, ckdgudang, nqty1, nqty2, csatuan1, csatuan2, nqtypakai, csatuanpakai, vketerangan, cJnsPakai, ckdprodkey, ckdfg, ncountstart, ncountstop, ncounttotal) " &_ " VALUES " &_ " (" & idDt & ",'" & idHd & "','" & selLoc & "','" & nopakai & "','" & cDate(request.form("hdnUsageDate")) & "','" & trim(ckdbarang) & "','" & trim(ckdgudang) & "'," & nqty1 & "," & nqty2 & ",'" & trim(csatuan1) & "','" & trim(csatuan2) & "'," & nqtypakai & ",'" & csatuanpakai & "','" & trim(keteranganItem) & "','" & trim(cjnspakai) & "','" & ckdprodkey & "','" &ckdfg& "'," & cnt1 & "," & cnt2 & "," & totalcnt & ")" set qInsertPakaiDt = objCommand.Execute end if next problem: old value of ckdfg didn't inserted to query, but the new value inserted. How to fix this bug?

    Read the article

  • How to upload a file from iPhone SDK to an ASP.NET vb.net web form using ASIFormDataRequest

    - by user289348
    Download http://allseeing-i.com/ASIHTTPRequest/. This works like a charm, is a good wrapper, and handles things nicely. Then make the request as shown below against the ASP.NET page listed after it. Create an ASP.NET web page that has a FileUpload control.

IPHONE CODE:

NSURL *url = [NSURL URLWithString:@"http://YourWebSite/Upload.aspx"];
ASIFormDataRequest *request = [ASIFormDataRequest requestWithURL:url];
//These two must be added. ASP.NET looks for them; if
//they are not there in the request, the file will not upload
[request setPostValue:@"" forKey:@"__VIEWSTATE"];
[request setPostValue:@"" forKey:@"__EVENTVALIDATION"];
[request setFile:@"PATH_TO_Local_File_on_Iphone/file/jpg" forKey:@"fu"];
[request startSynchronous];

This is the website code:

<%@ Page Language="VB" AutoEventWireup="false" CodeFile="Upload.aspx.vb" Inherits="Upload" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:FileUpload ID="fu" runat="server" EnableViewState="False" />
    </div>
    <asp:Button ID="Submit" runat="server" Text="Submit" />
    </form>
</body>
</html>

//Code behind page
Partial Class Upload
    Inherits System.Web.UI.Page

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
        If fu.HasFile = True Then
            'fu.PostedFile
            fu.SaveAs("E:\InetPub\UploadedImage\" & fu.FileName)
        End If
    End Sub
End Class
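Because startSynchronous blocks until the request completes, it is worth checking the outcome before assuming the upload worked. A minimal sketch, assuming the same request object (error and responseString are standard ASIHTTPRequest properties):

NSError *error = [request error];        // nil when the request completed successfully
if (error) {
    NSLog(@"Upload failed: %@", error);
} else {
    // Whatever markup Upload.aspx rendered back, useful for debugging
    NSLog(@"Server response: %@", [request responseString]);
}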

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >