Search Results

Search found 38284 results on 1532 pages for 'object reference'.

Page 559/1532 | < Previous Page | 555 556 557 558 559 560 561 562 563 564 565 566  | Next Page >

  • Why don't we use dynamic (server-side generated) CSS?

    - by ern0
    While server-side generated HTML is trivial (and it was the only way to make dynamic web pages before AJAX), server-side generated CSS is not; actually, I've never seen it. There are CSS compilers, but they generate CSS files which are then used as static files. Technically, it requires no special libraries: the HTML style tag should reference the PHP(/ASP/whatever) templater script instead of the static CSS file, and the script should send out a CSS content-type header, and that's all. Does it have cache problems? I don't think so; the script should send out no-cache etc. headers. Is it a problem for designers? No, they should edit the CSS template (as they edit the HTML template). Why don't we use dynamic CSS generators? Or if there are any, please let me know.
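
    For illustration, a minimal sketch of such a generator in ASP.NET terms (the handler name, the colour variable and the emitted rule are assumptions; the point is only the content-type and cache headers the question describes):

        using System.Web;

        // A stylesheet URL (e.g. site.css) would be mapped to this handler
        // instead of pointing at a static file.
        public class DynamicCssHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // The two headers the question talks about: CSS content type, no caching.
                context.Response.ContentType = "text/css";
                context.Response.Cache.SetCacheability(HttpCacheability.NoCache);

                string brandColor = "#336699"; // could come from a DB, config, or user profile
                context.Response.Write("body { color: " + brandColor + "; }");
            }
        }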

    Read the article

  • Implementing synchronous MediaTypeFormatters in ASP.NET Web API

    - by cibrax
    One of the main characteristics of MediaTypeFormatters in ASP.NET Web API is that they leverage the Task Parallel Library (TPL) for reading or writing a model from or to a stream. When you derive your class from the base class MediaTypeFormatter, you have to implement either the WriteToStreamAsync or the ReadFromStreamAsync method, for writing or reading a model respectively. These two methods return a Task which internally does all the serialization work, as illustrated below.

        public abstract class MediaTypeFormatter
        {
            public virtual Task WriteToStreamAsync(Type type, object value, Stream writeStream,
                HttpContent content, TransportContext transportContext);
            public virtual Task<object> ReadFromStreamAsync(Type type, Stream readStream,
                HttpContent content, IFormatterLogger formatterLogger);
        }

    However, most of the time serialization is a safe operation that can be done synchronously. In fact, many of the serializer classes you will find in the .NET Framework only provide sync methods. So the question is, how can you transform that synchronous work into a Task? Creating a new task with Task.Factory.StartNew to do all the serialization work would probably be the typical answer. That would work, as a new task is going to be scheduled. However, it might involve some unnecessary context switches, which are out of our control and might affect performance, especially in server code. If you take a look at the source code of the MediaTypeFormatters shipped as part of the framework, you will notice that they are actually using another pattern, which relies on the TaskCompletionSource class.
        public Task WriteToStreamAsync(Type type, object value, Stream writeStream,
            HttpContent content, TransportContext transportContext)
        {
            var tsc = new TaskCompletionSource<AsyncVoid>();
            tsc.SetResult(default(AsyncVoid));

            // Do all the serialization work here synchronously

            return tsc.Task;
        }

        /// <summary>
        /// Used as the T in a "conversion" of a Task into a Task{T}
        /// </summary>
        private struct AsyncVoid
        {
        }

    They are basically doing all the serialization work synchronously and using a TaskCompletionSource to return an already-completed task. To conclude this post, this is another approach you might want to consider when using serializers that are not compatible with an async model. Update: Henrik Nielsen from the ASP.NET team pointed out the existence of a built-in media type formatter base class for writing sync formatters, BufferedMediaTypeFormatter: http://t.co/FxOfeI5x
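
    As an illustration, a minimal sketch of a formatter built on that base class (the media type, the string-only model and the formatting are assumptions; only the synchronous override signatures come from BufferedMediaTypeFormatter):

        using System;
        using System.IO;
        using System.Net.Http;
        using System.Net.Http.Formatting;
        using System.Net.Http.Headers;

        public class PlainTextFormatter : BufferedMediaTypeFormatter
        {
            public PlainTextFormatter()
            {
                SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/plain"));
            }

            public override bool CanReadType(Type type)  { return type == typeof(string); }
            public override bool CanWriteType(Type type) { return type == typeof(string); }

            // Synchronous overrides; the base class wraps them in completed tasks.
            public override void WriteToStream(Type type, object value, Stream writeStream, HttpContent content)
            {
                var writer = new StreamWriter(writeStream);
                writer.Write((string)value);
                writer.Flush(); // do not close the stream; Web API owns it
            }

            public override object ReadFromStream(Type type, Stream readStream, HttpContent content, IFormatterLogger formatterLogger)
            {
                return new StreamReader(readStream).ReadToEnd();
            }
        }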

    Read the article

  • Does software rot refer primarily to performance, or to messy code?

    - by Kazark
    Wikipedia's definition of software rot focuses on the performance of the software. This is a different usage than I am used to; I had thought of it much more in terms of the cleanliness and design of the code—in terms of the code's having all the standard quality characteristics: readability, maintainability, etc. Now, performance is likely to go down when the code becomes unreadable, because no one knows what is going on. But does the term software rot have special reference to performance? Or am I right in thinking it refers to the cleanliness of the code? Or is this perhaps a case of multiple senses of the term being in common usage—from the user's perspective, it has to do with performance; but for the software craftsman, it has to do more specifically with how the code reads?

    Read the article

  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask if there was an implementation already available --even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter, just that there are only two returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]
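
    One possible greedy sketch in C# (not from the question; the names and the bool[,] mask representation are assumptions, and the approach is not guaranteed to produce a minimal set of rectangles in every case): pick the first uncovered set pixel, grow a rectangle right and then down as far as the mask allows, record it, mark it covered, and repeat. On a cross-shaped mask like the example it should yield the two overlapping rectangles described.

        using System;
        using System.Collections.Generic;

        struct Rect { public int X, Y, W, H; }

        static class MaskToRects
        {
            public static List<Rect> Decompose(bool[,] mask)
            {
                int h = mask.GetLength(0), w = mask.GetLength(1);
                var covered = new bool[h, w];
                var rects = new List<Rect>();

                for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    if (!mask[y, x] || covered[y, x]) continue;

                    // Grow right while pixels are set.
                    int rw = 1;
                    while (x + rw < w && mask[y, x + rw]) rw++;

                    // Grow down while the whole row segment is set.
                    int rh = 1;
                    while (y + rh < h && RowIsSet(mask, y + rh, x, rw)) rh++;

                    rects.Add(new Rect { X = x, Y = y, W = rw, H = rh });

                    // Mark the rectangle as covered; overlaps with later rectangles are allowed.
                    for (int yy = y; yy < y + rh; yy++)
                        for (int xx = x; xx < x + rw; xx++)
                            covered[yy, xx] = true;
                }
                return rects;
            }

            static bool RowIsSet(bool[,] mask, int row, int x, int w)
            {
                for (int i = 0; i < w; i++)
                    if (!mask[row, x + i]) return false;
                return true;
            }
        }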

    Read the article

  • C# Interview Preparation - References?

    - by Kanini
    This is a specific question relating to C#. However, it can be extrapolated to other languages too. While preparing for an interview as a C# developer (ASP.NET or WinForms or ), what would be the typical reference material one should look at? Are there any good books/interview question collections one should look at in order to be better prepared? This is just to know the different scenarios. For example, I might be writing SQL stored procedures and queries, but I might stumble when suddenly asked: "Given an Employee table with the following columns: EmployeeId, EmployeeName, ManagerId, write a SQL query which will get me the name of each employee and their manager's name." NOTE: I am not asking for a question bank so that I can learn by rote what the questions are and reproduce them (which, obviously, will NOT work!)

    Read the article

  • Resource management question. Resource containing resource

    - by bobenko
    I have a resource manager handling the usual resource loading, unloading, etc. With resources such as images and meshes there is no problem. But what should I do when I have a resource containing another resource (for example, a spriteFont contains a reference to a sprite and the letter descriptions)? Should that sprite be added to the resource manager, or must my spriteFont be the only owner of that resource? Any thoughts on this? Have you faced such a problem? Thanks in advance.
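
    As an illustration of one possible arrangement (not taken from the question; all names are hypothetical): the manager stays the single owner of every resource and reference-counts it, and the composite spriteFont merely acquires its sprite through the manager instead of owning it outright.

        using System;
        using System.Collections.Generic;

        class ResourceManager
        {
            readonly Dictionary<string, (object Resource, int RefCount)> cache =
                new Dictionary<string, (object Resource, int RefCount)>();

            public T Acquire<T>(string key, Func<T> load)
            {
                if (cache.TryGetValue(key, out var entry))
                {
                    cache[key] = (entry.Resource, entry.RefCount + 1);
                    return (T)entry.Resource;
                }
                var resource = load();
                cache[key] = (resource, 1);
                return resource;
            }

            public void Release(string key)
            {
                if (!cache.TryGetValue(key, out var entry)) return;
                if (entry.RefCount <= 1) cache.Remove(key);            // unload here
                else cache[key] = (entry.Resource, entry.RefCount - 1);
            }
        }

        class Sprite { /* pixel data */ }

        class SpriteFont
        {
            public Sprite Sheet { get; }

            public SpriteFont(ResourceManager resources)
            {
                // The font does not own the sprite; it holds the handle the manager
                // gave it, so the sprite can be shared and unloaded centrally.
                Sheet = resources.Acquire("font-sheet", () => new Sprite());
            }
        }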

    Read the article

  • Going home now :-)

    - by Mike Dietrich
    3 weeks of traveling through Asia and Australia - nearly 500 customers and partners in 8 workshops in Tokyo, Seoul, Beijing, Shenzhen, Singapore, Melbourne, Perth and Manila. Great people in all places, many interesting discussions, several new reference prospects for Oracle Database 11g Release 2 - YOU should upgrade as well pretty soon :-) But now it's time to go home. We are a bit exhausted, but we really enjoyed talking to and with you. And I suppose we'll meet again sooner or later. Thanks to everybody - and special thanks to the local colleagues, and especially to Abe-san, Kota-san, Blair Layton and Shaheen Ismail, for taking care of us and organizing our workshops and the whole setup!!!

    Read the article

  • Multiple Denial of Service vulnerabilities in libpng

    - by chandan
    Component: PNG reference library (libpng)
    CVE / Description / CVSSv2 Base Score:
      CVE-2007-5266  Denial of Service (DoS) vulnerability  4.3
      CVE-2007-5267  Denial of Service (DoS) vulnerability  4.3
      CVE-2007-5268  Denial of Service (DoS) vulnerability  4.3
      CVE-2007-5269  Denial of Service (DoS) vulnerability  5.0
      CVE-2008-1382  Denial of Service (DoS) vulnerability  7.5
      CVE-2008-3964  Denial of Service (DoS) vulnerability  4.3
      CVE-2009-0040  Denial of Service (DoS) vulnerability  6.8
    Product and Resolution:
      Solaris 10  SPARC: 137080-03  X86: 137081-03
      Solaris 9   SPARC: 139382-02 114822-06  X86: 139383-02
      Solaris 8   SPARC: 114816-04  X86: 114817-04
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • unseen/unknown function definition in linux source

    - by Broncha
    Can anyone please explain this piece of code I found in the Linux kernel source? I see a lot of code like this in the Linux and Minix kernels but can't seem to find out what it does (or even whether C compilers support that kind of function definition).

        /* IRQs are disabled and uidhash_lock is held upon function entry.
         * IRQ state (as stored in flags) is restored and uidhash_lock released
         * upon function exit.
         */
        static void free_user(struct user_struct *up, unsigned long flags)
            __releases(&uidhash_lock)
        {
            uid_hash_remove(up);
            spin_unlock_irqrestore(&uidhash_lock, flags);
            key_put(up->uid_keyring);
            key_put(up->session_keyring);
            kmem_cache_free(uid_cachep, up);
        }

    I cannot find out whether this __releases(&uidhash_lock) reference before the function body is allowed OR supported. (It sure is supported, as it is in the Linux kernel.)

    Read the article

  • Any valid reason to Nest Master Pages in ASP.Net rather than Inherit?

    - by James P. Wright
    Currently in a debate at work and I cannot fathom why someone would intentionally avoid inheritance with Master Pages. For reference, here is the project setup:

      BaseProject
        MainMasterPage
          SomeEvent
      SiteProject
        SiteMasterPage (nests MainMasterPage)
      OtherSiteProject
        MainMasterPage (from BaseProject)

    The debate came up because some code in BaseProject needs to know about "SomeEvent". With the setup above, the code in BaseProject needs to call this.Master.Master. That same code in BaseProject also applies to OtherSiteProject, where it is just accessed as this.Master. SiteMasterPage has no code differences, only HTML differences. If SiteMasterPage inherits MainMasterPage rather than nesting it, then all code is valid as this.Master. Can anyone think of a reason to use a nested Master Page here instead of an inherited one?
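
    A tiny sketch of the difference being debated (a hypothetical helper; it assumes the master pages' code-behind classes carry the same names as above): under inheritance this.Master already is a MainMasterPage, while under nesting the shared code has to reach up one extra level.

        using System.Web.UI;

        public static class MasterPageLookup
        {
            public static MainMasterPage FindMainMaster(Page page)
            {
                return page.Master as MainMasterPage          // SiteMasterPage inherits MainMasterPage
                    ?? page.Master.Master as MainMasterPage;  // SiteMasterPage merely nests it
            }
        }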

    Read the article

  • Google I/O 2010 - Building your own Google Wave provider

    Google I/O 2010 - Open source Google Wave: Building your own wave provider (Wave 101). Speakers: Dan Peterson, Jochen Bekmann, JD Zamfirescu Pereira, David LaPalomento (Novell). Learn how to build your own wave service. Google is open sourcing the lion's share of the code that went into creating Google Wave to help bootstrap a network of federated providers. This talk will discuss the state of the reference implementation: the software architecture, how you can plug it into your own use cases -- and how you can contribute to the code and definition of the underlying specification. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 8 | 0 ratings | Time: 59:03 | More in Science & Technology

    Read the article

  • Setup DKIM (DomainKeys) for Ubuntu, Postfix and Mailman

    - by MountainX
    I'm running Ubuntu 12.04 with Postfix and Mailman. I want to set up DKIM. (DomainKeys Identified Mail, or DKIM, is the successor to Yahoo's "DomainKeys"; it incorporates Cisco's Identified Internet Mail.) What are the steps for setting this up? Is opendkim recommended? The only reference I have is HowToForge, but I prefer to get help here (even if it is just a confirmation of the steps at that link). Actually, I think the info at HowToForge is outdated because it mentions dkim-filter instead of opendkim.

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
    At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently--"rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it." Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs, running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces -- dragging and dropping various image assets to build the application's UI, then fine tuning a CSS file for the finished look and feel. Among many other new features, in the past six months, JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures. Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including Lambda expressions and better parallel libraries. These feature changes both simplify code and free up libraries to more effectively use parallelism. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz. Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chipset. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed. Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features: http://openjdk.java.net/projects/jdk8, http://openjdk.java.net/projects/jdk8/spec, and http://jdk8.java.net. From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity and HTML5 functionality (WebSocket, JSON, and HTML5 forms).
Part of the planned productivity increase of the release will come from a reduction in writing boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2, 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML 5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application using many of the enhancements of Java EE 7, along with an Avatar HTML 5 infrastructure, and running on the GlassFish reference implementation.Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML 5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that NetBeans 7.3 beta will be released later this week, and will include support for creating such HTML 5 project types. Gupta directed conference attendees to: http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2

    - by pinaldave
    Social media has created an Always Connected World for us. Recently I enrolled myself to learn new technologies as a student. I had decided to focus on learning and not to stay connected to the internet while I was in the learning session. On the second day of the event, after the learning was over, I noticed lots of notifications from my friend on my various social media handles. He had connected with me on Twitter, Facebook, Google+, LinkedIn, YouTube, as well as SMS and WhatsApp on the phone, Skype messages and, not to forget, a few emails. I right away called him up. The problem was very unique; let us hear the problem in his own words. “Pinal – we are in big trouble and we are not able to figure out what is going on. Our product details table is continuously losing rows. Lots of rows have disappeared since morning and we are unable to find why the rows are getting deleted. We have made sure that there is no DELETE command executed on the table as well. As a matter of fact, we have removed, from every single place, the code which references the table. We have done so many crazy things out of desperation but no luck. The rows are continuously deleted in a random pattern. Do you think we have a problem with intrusion or a virus?” After describing the problem he had pasted a few rants about why I was not available during the day. I think it would not be smart to post those exact words here (due to many reasons). Well, my immediate reaction was to get online with him. The problem was unique to him, and his team had been all out to fix the issue since morning. As he said, he had done quite a lot out of desperation. I started asking questions about auditing, policy management and profiling the data. Very soon I realized that this problem was not as advanced as it looked. There was no intrusion, SQL Injection or virus issue. Well, long story short: it was a very simple issue of a foreign key created with ON UPDATE CASCADE and ON DELETE CASCADE. CASCADE allows deletions or updates of key values to cascade through the tables defined to have foreign key relationships that can be traced back to the table on which the modification is performed. ON DELETE CASCADE specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. ON UPDATE CASCADE specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. (Reference: BOL) In simple words: when ON DELETE CASCADE is specified and data is deleted from Table A, any rows that reference it through a foreign key in another table are deleted as well. In my friend’s case, they had two tables, Products and ProductDetails. They had created foreign key referential integrity on the product id between the tables. Now, as fall came around, they were updating their catalogue, and while doing so they were deleting products which were no longer available. As the changes cascaded, the corresponding rows were also deleted from the other table. This is CORRECT. As a matter of fact, there is no error or anything wrong here; SQL Server is behaving exactly how it should behave. The problem was in the understanding and the inappropriate implementation of the business logic.
    What they needed was a Product Master table, a Current Product Catalogue table, and a Product Order Details History table. However, they were using only two tables, and without proper understanding the relation between them was built using foreign keys. If there were to be only two tables, they should have used a soft delete, which does not actually delete the record but just hides it in the original product table. This workaround could have saved them from the cascading delete issue. I will be writing a detailed post on the design implications in a future post, as the three lines above cannot cover every issue related to design, and it is also not the scope of this blog post. More about designing in future blog posts. Once they learned their mistake, they were happy that there was no intrusion; but trust me, sometimes we are our own enemy, and this is a great example of it. In tomorrow’s blog post we will go over their code and workarounds. Feel free to share your opinions, experiences and comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Building TrueCrypt on Ubuntu 13.10

    - by linuxubuntu
    With the whole NSA thing, people have tried to rebuild binaries identical to the ones which truecrypt.org provides, but didn't succeed. So some think the binaries might be compiled with backdoors which are not in the source code. So how do I compile it on the latest Ubuntu version (I'm using Ubuntu GNOME, but that shouldn't matter)? I tried some tutorials for previous Ubuntu versions, but they don't seem to work anymore. Edit: https://madiba.encs.concordia.ca/~x_decarn/truecrypt-binaries-analysis/ Now you might think "OK, we don't need to build", but: to build, he used closed-source software, and there are proofs of concept where a compromised compiler still puts backdoors into the binary: 1. source without backdoors, 2. binary identical to the reference binary, 3. binary still contains backdoors.

    Read the article

  • How to create an Access database by using ADOX and Visual C# .NET

    - by SAMIR BHOGAYTA
    Build an Access Database
    1. Open a new Visual C# .NET console application.
    2. In Solution Explorer, right-click the References node and select Add Reference.
    3. On the COM tab, select Microsoft ADO Ext. 2.7 for DDL and Security, click Select to add it to the Selected Components, and then click OK.
    4. Delete all of the code from the code window for Class1.cs.
    5. Paste the following code into the code window:

        using System;
        using ADOX;

        class Class1
        {
            static void Main()
            {
                // Create a new Jet (Access) database at the given path.
                ADOX.CatalogClass cat = new ADOX.CatalogClass();
                cat.Create("Provider=Microsoft.Jet.OLEDB.4.0;" +
                           "Data Source=D:\\NewMDB.mdb;" +
                           "Jet OLEDB:Engine Type=5");
                Console.WriteLine("Database Created Successfully");
                cat = null;
            }
        }

    Read the article

  • Using Java classes in JRuby

    - by kerry
    I needed to do some reflection today, so I fired up interactive JRuby (jirb) to inspect a jar. Surprisingly, I couldn't remember exactly how to use Java classes in JRuby. So I did some searching on the internet and found this to be a common question with many answers. So I figure I will document it here in case I forget in the future. Add the jar's folder to the load path, require it, then use it!

        $: << '/path/to/my/jars/'
        require 'myjar'

        # so we don't have to reference it absolutely every time (optional)
        include Java::com.goodercode

        my_object = SomeClass.new

    Read the article

  • Has RFC2324 been implemented?

    - by anthony-arnold
    I know RFC2324 was an April Fools joke. However, it seems pretty well thought out, and after reading it I figured it wouldn't be out of the question to design an automated coffee machine that used this extension to HTTP. We programmers love to reference this RFC when arguing web standards ("418 I'm a Teapot lolz!") but the joke's kind of on us. Ubiquitous computing research assumes that network-connected coffee machines are probably going to be quite common in the future, along with Internet-connected fruit and just about everything else. Has anyone actually implemented a coffee machine that is controlled via HTCPCP? Not necessarily commercial, but hacked together in a garage, maybe? I'm not talking about just a web server that responds to HTCPCP requests; I mean a real coffee machine that actually makes coffee. I haven't seen an example of that.

    Read the article

  • How do I ban a wifi network in Network Manager?

    - by Chris Conway
    My wifi connection drops sometimes and, for some reason, Network Manager attempts to connect to my neighbor's network, which requires a password that I don't know. The network in question is not listed in the "Edit Connections..." dialog and I can find no reference to it in any configuration file, but still the password dialog pops up every time my main connection drops. Is there a way to blacklist a wireless network so that the Network Manager will never attempt to connect to it? Or, equivalently, how can I remove the configuration data that causes the Network Manager to attempt to connect to this particular network?

    Read the article

  • What are the files pushed to MDS?

    - by harsh.singla
    All files under AIAComponents are moved to MDS. This includes the EnterpriseObjectLibrary, EnterpriseBusinessServiceLibrary, ApplicationObjectLibrary, ApplicationBusinessServiceLibrary, B2BObjectLibrary, ExtensionServiceLibrary, and UtilityArtifacts. There are also some common transformation (.xsl) files, kept under the Transformations folder, which are moved to MDS. The AIAConfigurationProperties.xml file will also be in MDS. Every cross-reference (.xref) object will be there as well. Every Domain Value Map (.dvm) will also be there. So will the common fault policy, which is included in a composite by default during composite generation if the user does not choose to customize the fault policy. All these files are located under the AIAMetaData directory and then placed in their respective folders. We are also planning to put error handling and BSR system-related data into MDS.

    Read the article

  • SEO. dofollow trackback or nofollow trackback?

    - by Ernesto Marrero
    Thinking about SEO: is it better for trackbacks to be dofollow or nofollow? When I write a post on my blog, it automatically sends a trackback to my site from http://bitacoras.com, and that link is dofollow. Is it advisable to give relevance to this link by also handing out a dofollow link? For example, some domain has PageRank 3 on its homepage, and it has a PageRank 0 internal page to which I send trackbacks. Is it worth sending a dofollow link to that page to increase its PageRank? Would my PageRank increase if I performed this operation, and does it work well? Any reference in writing would be appreciated.

    Read the article

  • Is LightDM broken? Autologin works

    - by Ben
    I recently upgraded my Mythbuntu box to 11.10. LightDM worked for a while, and then I configured autologin on my user account. Some time later, I removed some packages to slim down the system. Now when I log out (from either Unity or GNOME Shell) I lose the X session and LightDM doesn't start back up. If I disable autologin through /etc/lightdm/lightdm.conf I get no login screen and no X. As I said, autologin works and gives me a Ubuntu Classic desktop. This is the only reference to lightdm in dmesg: [ 17.351023] type=1400 audit(1329530135.420:19): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm-guest-session-wrapper" pid=1097 comm="apparmor_parser" Can anyone help?

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful for making packages flexible, so that you can change object properties at run time and thus make a package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run a package you have to pass it the correct Package Configuration. I usually use XML configuration files, and I also force everyone that works with me to make sure that an object used in several packages has the same name in all packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of such objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have a configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific – or "local" – XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx Now, how can we improve this even more? I'd like to have a package that, when it's run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table, defined as shown in the original post. In this table you'll put the values that you want to be used at runtime by your package. The ConfigurationFilter column specifies to which package a configuration line has to be applied: a package will use a line only if the value specified in the ConfigurationFilter column is equal to its name. In the above sample, only the package named "simple-package" will use line number two. There is an exception here: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it's possible to replicate the "global" and "local" configuration approach I've described before. The ConfigurationValue column contains the value you want to be applied at runtime, and the PackagePath column identifies the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value, simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a primary key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS to put DTS Configurations into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration"

    The /AC option expects a string with the following format: <database_server>;<database_name>;<table_name>. Only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package.
    The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, which load the "global" and "local" configuration. Now, you may start to wonder why this approach is needed at all, instead of simply always passing two XML DTS Configuration files to a package (one for the "local" and one for the "global" configuration), doing something like this:

        DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig"

    The problem is that this approach doesn't work if one of the two configuration files contains a value that has to be applied to an object that doesn't exist in the loaded package. This situation raises an error that halts package execution. To work around it, you may want to create a configuration file for each package. Unfortunately this makes deployment and management harder, since you'll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
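
    As a rough sketch of the kind of injection described above, done through the SSIS runtime API (how DTLoggedExec implements this internally is an assumption; only the names come from the post):

        using Microsoft.SqlServer.Dts.Runtime;

        class AutoConfigureDemo
        {
            static void Main()
            {
                var app = new Application();
                Package package = app.LoadPackage(@"mypackage.dtsx", null);

                // Inject the connection manager that the SQL Server configurations
                // will use, pointing it at the auto-configuration table's database.
                ConnectionManager cm = package.Connections.Add("OLEDB");
                cm.Name = "$$DTLoggedExec_AutoConfigure";
                cm.ConnectionString =
                    "Data Source=localhost;Initial Catalog=ssis_auto_configuration;" +
                    "Provider=SQLNCLI10.1;Integrated Security=SSPI;";

                // The injected $$DTLoggedExec_Global / $$DTLoggedExec_Local
                // configurations would then reference this connection manager.
                package.Execute();
            }
        }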

    Read the article

  • SQLAuthority News – Windows Azure Training Kit Updated October 2012

    - by pinaldave
    Microsoft has recently released an update to the Windows Azure Training Kit. Earlier this month they updated the kit and included quite a lot of new content. The training kit now contains 47 hands-on labs, 24 demos and 38 presentations. The best part is that the kit is now available to download in two different formats: 1) Full Package (324.5 MB) and 2) Web Installer (2.4 MB). The full package enables you to download all of the hands-on labs and presentations to your local machine. The Web Installer allows you to select and download just the specific hands-on labs and presentations that you need. This Windows Azure Training Kit contains hands-on labs, presentations, videos and demos, and I encourage all of you to try it out. The kit also contains details about samples and tools. The training kit is the most authoritative learning resource on Windows Azure. You can download the Windows Azure Training Kit from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Azure, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Fixing LINQ Error: Sequence contains no elements

    - by ChrisD
    I’ve read some posts regarding this error when using the First() or Single() command.   They suggest using FirstOrDefault() or SingleorDefault() instead. But I recently encountered it when using a Sum() command in conjunction with a Where():   Code Snippet var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).Max(p => p.Amount);   When the Where() function eliminated all the items in the policies collection, the Sum() command threw the “Sequence contains no elements” exception.   Inserting the DefaultIfEmpty() command between the Where() and Sum(), prevents this error: Code Snippet var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).DefaultIfEmpty().Max(p => p.Amount);   but now throws a Null Reference exception!   The Fix: Using a combination of DefaultIfEmpty() and a null check in the Sum() command solves this problem entirely: Code Snippet var effectiveFloor = policies.Where(p => p.PricingStrategy == PricingStrategy.EstablishFloor).DefaultIfEmpty().Max(p =>  p==null?0 :p.Amount);

    Read the article

< Previous Page | 555 556 557 558 559 560 561 562 563 564 565 566  | Next Page >