Search Results

Search found 25180 results on 1008 pages for 'post processing'.

  • Next Post...

    - by James Michael Hare
    The next post on the concurrent collections will be next Monday.  I'm a little behind from my Topeka trip earlier this week, so sorry about the delay! Also, I was thinking about starting a C++ Little Wonders series as well.  Would anyone have an interest in that topic?  I primarily use C# in my development work, but there is still a lot of legacy C++ I work on as well and could share some tips & tricks.

  • Is SOAP HTTP POST more complicated than I thought?

    - by Pete Petersen
    I'm currently writing a bit of code to send some XML data to a web service via HTTP POST. I thought this would be really simple and have written the following example code (C#):

        Console.WriteLine("Press enter to send data...");
        while (Console.ReadLine() != "q")
        {
            HttpWebRequest httpWReq = (HttpWebRequest)WebRequest.Create(@"http://localhost:8888/");

            Foo fooItem = new Foo
            {
                Member1 = "05",
                Member2 = "74455604",
                Member3 = "15101051",
                Member4 = 1,
                Member5 = "fsf",
                Member6 = 6.52,
            };

            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = fooItem.ToXml();
            byte[] data = encoding.GetBytes(postData);

            httpWReq.Method = "POST";
            httpWReq.ContentType = "application/xml";
            httpWReq.ContentLength = data.Length;

            using (Stream stream = httpWReq.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            HttpWebResponse response = (HttpWebResponse)httpWReq.GetResponse();
            string responseString = new StreamReader(response.GetResponseStream()).ReadToEnd();
            Console.WriteLine("Received " + responseString);
            Console.WriteLine("Press enter to send data...");
        }

    This is all I thought would be necessary; however, I have now been given the details for the web service. This included some information which is unfamiliar to me, and I'm unsure whether I need to include it. The information I was sent was:

        <url>http://sometext/soap/rpc</url>
        <namespace>http://sometext/a.services</namespace>
        <method>receiveInfo</method>
        <parm-id>xmldata</parm-id> (Input data) (Actual XML data as string)
        <parm-id>status</parm-id> (Output data)
        <userid>user</userid>
        <password>pass</password>
        <secure>false</secure>

    I guess this means I need to include a username and password somehow, but I'm not sure what the namespace or method fields are used for. Could anyone give me a hint? Sorry, I've never used web services before.
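
    Since the service details point at a SOAP RPC endpoint, a raw application/xml POST is probably not enough: the payload typically has to be wrapped in a SOAP envelope, with the method (receiveInfo) qualified by the given namespace. A minimal sketch of what that wrapping might look like in C#, reusing the question's fooItem; the exact envelope shape, SOAPAction value, and credential mechanism (HTTP auth vs. WS-Security headers) are assumptions that would need to be confirmed against the service's WSDL:

        // Sketch only: wrap the XML payload in a SOAP 1.1 envelope.
        // receiveInfo, xmldata and the namespace come from the service details above;
        // how credentials are passed is an assumption.
        string soapEnvelope =
            @"<?xml version=""1.0"" encoding=""utf-8""?>
        <soap:Envelope xmlns:soap=""http://schemas.xmlsoap.org/soap/envelope/""
                       xmlns:a=""http://sometext/a.services"">
          <soap:Body>
            <a:receiveInfo>
              <xmldata>" + System.Security.SecurityElement.Escape(fooItem.ToXml()) + @"</xmldata>
            </a:receiveInfo>
          </soap:Body>
        </soap:Envelope>";

        HttpWebRequest soapReq = (HttpWebRequest)WebRequest.Create("http://sometext/soap/rpc");
        soapReq.Method = "POST";
        soapReq.ContentType = "text/xml; charset=utf-8";
        soapReq.Headers.Add("SOAPAction", "receiveInfo"); // exact value is an assumption
        soapReq.Credentials = new NetworkCredential("user", "pass"); // if the service uses HTTP auth

        byte[] envelope = Encoding.UTF8.GetBytes(soapEnvelope);
        soapReq.ContentLength = envelope.Length;
        using (Stream stream = soapReq.GetRequestStream())
        {
            stream.Write(envelope, 0, envelope.Length);
        }

    In practice, if the service publishes a WSDL, generating a proxy class (Add Service Reference in Visual Studio, or wsdl.exe) avoids hand-building envelopes like this entirely.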

  • It has been a long time since last post

    - by The Official Microsoft IIS Site
    Wow, I just realized that in the last 6 months I've only had a chance to post 2 items, and I think it is about time to get this going again. So why all this silence? Well, about 8 months ago a couple of big changes happened at my division, as described in this link . As part of that transition my responsibilities changed and I moved from being the Development Manager for the Web Platform (IIS, WebMatrix, WebDeploy, etc…) to a new role, starting a new team that we called the Azure UX team....(read more)

  • Uploading images to a post like on the stackexchange websites [on hold]

    - by Loko
    Stack Exchange uses imgur links to show images on its websites, but how do I do this? I am really curious what ways there are to upload an image to a file and show it again immediately. Webshops where you can post your products also upload images and show them immediately. What ways are there to do this, and which is the best / most secure? (I assume the way Stack Exchange does it, but I also have no idea how to do that.)

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about Transaction Logs and recovery on my blog, and simplifying the basics is a challenge. In the real world we always see checks and queues for a process – say railway reservations, banks, customer support etc. – there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and they implement the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.

    Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously.

    There are two approaches to managing concurrent data access: the Optimistic Concurrency Model and the Pessimistic Concurrency Model.

    Concurrency Models

    Pessimistic Concurrency: The default behavior is to acquire locks to block access to data that another process is using. It assumes that enough data modification operations are in the system that any given read operation is likely to be affected by a data modification made by another user (it assumes conflicts will occur). It avoids conflicts by acquiring a lock on data being read, so no other process can modify that data, and it also acquires locks on data being modified, so no other process can access that data for either reading or modifying. Readers block writers; writers block readers and writers.

    Optimistic Concurrency: Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs. Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data. Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows. Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction Processing

    A transaction is the basic unit of work in SQL Server. A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: the start is marked with a BEGIN TRAN and the end is marked by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases, ensuring that all transactions are performed as a single unit of work regardless of hardware or system failure. A – Atomicity, C – Consistency, I – Isolation, D – Durability.

    Atomicity: Each transaction is treated as all or nothing – it either commits or aborts.

    Consistency: Ensures that a transaction won't allow the system to arrive at an incorrect logical state – the data must always be logically correct. Consistency is honored even in the event of a system failure.

    Isolation: Separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.

    Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors, known as dependency problems or consistency problems.

    Lost Updates: Occur when two processes read the same data, both manipulate the data, changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.

    Dirty Reads: Occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.

    Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data twice within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.

    Phantoms: Occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports 5 isolation levels that control the behavior of read operations.

    Read Uncommitted: All behaviors except lost updates are possible. It is implemented by allowing read operations to take no locks, so a read won't be blocked by conflicting locks acquired by other processes; the process can read data that another process has modified but not yet committed. When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and then the update moves the row to a higher page number than your scan has reached. Performing an allocation order scan under read uncommitted can also cause you to miss a row completely – this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed: There are two varieties of read committed isolation: optimistic and pessimistic (the default). It ensures that a read never reads data that another application hasn't committed. If another transaction is updating data and holds exclusive locks on it, your transaction will have to wait for those locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading the data, but it prevents them from updating it. Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait; SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read: This is a pessimistic isolation level. It ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. However, it does allow phantom rows to appear. Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot: Snapshot Isolation (SI) is an optimistic isolation level. It allows processes to read older versions of committed data if the current version is locked. The difference between snapshot and read committed has to do with how old the older versions have to be. It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable: This is the strongest of the pessimistic isolation levels. It adds to the repeatable read isolation level by ensuring that if a query is reissued, rows have not been added in the interim; i.e., phantoms do not appear. Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read – all shared locks in a transaction must be held until the transaction completes. In addition, the serializable isolation level requires that you lock not only data that has been read but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock. The level gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now, if you are with me, let us continue learning with SQL Server Locking Basics.
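
    To make the isolation levels concrete, here is a small illustrative sketch (an addition, not from the original post) showing how a client application chooses between a pessimistic and an optimistic level through ADO.NET; the connection string and table are placeholders, and IsolationLevel.Snapshot additionally requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class IsolationDemo
        {
            static void Main()
            {
                // Placeholder connection string and table -- adjust for a real database.
                const string connStr = "Server=.;Database=TestDb;Integrated Security=true";

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    // Pessimistic: readers take share locks and wait behind writers.
                    using (SqlTransaction txPessimistic = conn.BeginTransaction(IsolationLevel.RepeatableRead))
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn, txPessimistic))
                    {
                        Console.WriteLine("RepeatableRead count: " + cmd.ExecuteScalar());
                        txPessimistic.Commit(); // share locks are held until here
                    }

                    // Optimistic: readers see the last committed row versions instead of waiting.
                    // Requires: ALTER DATABASE TestDb SET ALLOW_SNAPSHOT_ISOLATION ON
                    using (SqlTransaction txOptimistic = conn.BeginTransaction(IsolationLevel.Snapshot))
                    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn, txOptimistic))
                    {
                        Console.WriteLine("Snapshot count: " + cmd.ExecuteScalar());
                        txOptimistic.Commit();
                    }
                }
            }
        }
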
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

  • PPF Savings Interest Rate Increased To 8.6% From 8% [India]

    - by Gopinath
    Here is some good news for small Indian investors who save money in Public Provident Fund (PPF) accounts operated in post offices and a few nationalized banks – the returns on your PPF savings are going to increase. The Indian government has decided to increase the rate of interest paid to customers from 8% to 8.6%. To put it in numbers, if you have 2,00,000 of savings in a PPF account, it will give returns of 17,200/- per annum, compared to 16,000/- at 8%. Also, the maximum cap on investments into a PPF account per annum is increased to 1,00,000 from 70,000/-. PPF is one of the safest debt investments and gives very decent returns, but if you are a salaried employee with a PF account, then consider investing in Voluntary PF (VPF) instead of PPF, as VPF returns are higher than PPF. CC image credit: Dave Dugdale. This article titled, PPF Savings Interest Rate Increased To 8.6% From 8% [India], was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

  • wordpress add post validation

    - by dskanth
    This is embarrassing, but I am surprised to see that there is no validation by default while adding a new post in WordPress. When I don't enter a title or even any content, and just hit Publish, it says that the post is published, but when I view the front end, there is no new post. How could WordPress skip such simple validation for adding a post? At the least I expected server-side validation (if not client-side). It is up to WordPress whether they incorporate it in new versions, but I want to know how I can add JavaScript (or jQuery) validation for adding a post in WordPress. I know it must not be at all difficult, but being new to WordPress, I would like to get a hint. From Firebug, I can see the form renders like:

        <form id="post" method="post" action="post.php" name="post">
        ...
        </form>

    Where shall I put my JavaScript validation code?

  • New XEN Server, Intel i7, Errors were encountered while processing: xen-linux-system-amd64

    - by Sheldon
    I have just got a new machine to run XEN VMs on; it has an Intel i7 processor: Intel Haswell Core i7-4790 3.6GHz 8MB LGA1150. I have set up the host with the current 6.2.0 release. I have set up a new Debian 7 64-bit VM, and any package I try and run fails with the following errors:

        Errors were encountered while processing:
         xen-utils-common
         xen-utils-4.1
         xen-system-amd64
         xen-linux-system-3.2.0-4-amd64
         xen-linux-system-amd64
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Excuse my noob-ness, but should it even be running an AMD package? Any ideas on how to fix this? Thanks

  • bash completion processing gone bad, how to debug?

    - by msw
    It all started with a simple alias gv='gvim --remote-quiet' and now gv Space Tab gives nothing, where it normally should give filenames. Oddly, alias gvi='gvim --remote-quiet' works as expected. I clearly have a workaround, but I'd like to know what is catching my gv for special processing. compopt is no help, as gv shares the same settings as ls, which does filename completion correctly:

        $ compopt gv
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs gv
        $ compopt ls
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs ls

    The complete command is slightly more helpful, but it doesn't tell me why my two characters got singled out for alteration:

        $ complete -p gv
        complete -o filenames -F _filedir_xspec gv
        $ complete -p ls
        complete -o filenames -F _longopt ls
        $ complete -p echo
        bash: complete: echo: no completion specification
        $ alias gvi='gvim --remote-silent'
        msw@tallguy:~/.gnupg$ complete -p gvi
        bash: complete: gvi: no completion specification

    Where did complete -o filenames -F _filedir_xspec gv come from?

  • Which databases support parallel processing across multiple servers?

    - by David
    I need a database engine that can utilize multiple servers for processing a single SQL query in parallel. So far I know that this is possible with some engines, though none of them are feasible for me, either because of pricing or missing features. The engines currently known to me are:

        MS SQL (enterprise)
        DB2 (enterprise)
        Oracle (enterprise)
        GridSQL
        Greenplum

    Which other engines have this feature? Do you have any experience with using this feature? Edit: I have now proposed a method for creating one myself. Any input is welcome. Edit: I have found another one: Informix Extended Parallel Server.

  • What kind of CPU/GPU integration is offered by APUs?

    - by clabacchio
    I'm truly fascinated by the idea of GPGPU and using the GPU for heavy processing. I'm seeing that APUs (Accelerated Processing Units, CPU+GPU on the same chip) are also gaining popularity. Do all APUs support GPGPU? Can it be used for processing? And is it seamless, or does it require special code (like CUDA) to have the hard work done by the GPU? I'm not interested in bare graphics performance, but more in how much the GPU can accelerate "normal" CPU work.

  • Blackberry Won't Sync: Says Processing

    - by Noah
    I have a BlackBerry Storm 2 and I have it set up to sync, just like I've done with all the other ones in the company. They pull the address book/contacts from Outlook and aren't synced to an Exchange server. I have everything set up, and then when I hit synchronize it flashes "processing", the phone shows it trying to sync, and then it is done in less than a second. It did warn me that the computer was not supplying enough power to charge the device, and it said that I should make sure the drivers are correct. Also, I was warned that the device might not function properly, down in the bottom right hand corner. Any ideas? I've uninstalled and reinstalled the BlackBerry software from the disc.

  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites where, when a page is requested from their server, they grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server, and then send them to the user requesting the page. This doesn't make much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset management concept?

  • Building a workstation computer for Image processing? [closed]

    - by echolab
    I am taking a gigapixel image – my goal is 50 gigapixels – and shooting is almost done. I am doing some research to build a workstation so I can stitch the images together. My first question: could you suggest a dual-CPU mainboard that works fine with Xeon 5500+ processors and supports 64GB+ of RAM? My other question is which hardware is most important in image processing? All I see in the stories behind gigapixel panoramas is that they use dual Xeons and 32GB+ of RAM; I wonder if I am doing this right, since they don't post information on the graphics card, mainboard and so on. I did ask on several websites, but the best answer was to get some high-end workstation and plenty of hours. I don't want to purchase a ready-to-use workstation; I want to build it myself. Thanks in advance.

  • SQLAuthority News – Who I Am And How I Got Here – True Story as Blog Post

    - by pinaldave
    Here are a few of the sample questions I get every day: “Give me a shortcut to become a superstar.” “How do I become like you?” “Which book should I read so I know everything?” “Can you share your secret to being successful? I want to know it, but do not share it with others.” The generic answer I always give is to work hard and read good educational material or watch good online videos. One of the emails really caught my attention. It was from a friend and SQL Server Expert John Sansom (Blog | Twitter). He wrote asking if I would like to share my story with the world about “Who I am and How I got Here”. I was very much intrigued by his suggestion. John is one guy I respect a lot. Every single topic he writes, I read with dedication. I eagerly wait for his Weekly Summary of Best SQL Links. If you have not read them, you are missing out on something. Writing a guest post for him was like walking down memory lane. I remembered the time when I was beginning my career and was a bit overconfident and a bit naive. I had my share of mistakes when I started my career. As time passed by, I realized the truth. Well, we all make mistakes. Though, I am proud that as soon as I knew my mistakes, I corrected them. I never acted on impulse or when I was angry. I think that alone has helped me analyze situations better and become a better human being. Along the way, I lost my ego and it was replaced by passion. I am much happier and more successful in my work. Quite often people ask me if I am always online, and whether I have a family or not. Honestly, I am able to work hard because of my family. They support me and encourage me to enjoy what I do. They support everything I do and, personally, I do not miss a single occasion to join them in the daily chores of fun. If there were a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn, and there is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – the more the merrier. I have written the story of my life as a guest post. Read Here: A Journey to SQL Authority. Special thanks to John Sansom (Blog | Twitter) for giving me the space to tell my story. Indeed I am honored. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Rails & Twilio: Receiving nil when storing texts received from Twilio

    - by Jon Smooth
    I have set up the request URL in my Twilio account to have it POST to: myurl.com/receivetext. It appears to be successfully posting, because when I check the database using the Heroku console I see the following:

        Post id: 5, body: nil, from: nil, created_at: "2012-06-14 17:28:01", updated_at: "2012-06-14 17:28:01"

    Why is it receiving nil for the body and from attributes? I can't figure out what I'm doing wrong! The created and updated at are stored successfully, but the two attributes that I care about continue to be stored as nil. Here's the ReceiveText controller which receives the POST request from Twilio:

        class ReceiveTextController < ApplicationController
          def index
            @post = Post.create!(body: params[:Body], from: params[:From])
          end
        end

    EDIT: When I dump the params I receive the following:

        "{\"controller\"=>\"receive_text\", \"action\"=>\"index\"}"

    I attained this by inserting the following into my ReceiveText controller:

        @params = Post.create!(body: params.inspect, from: "Dumping Params")

    and then opening up the Heroku console to find the database entry with from = "Dumping Params". I simulated a Twilio request by sending a curl to the receivetext route:

        curl -X POST myurl.com/receivetext -d 'AccountSid=AC123&From=%2B19252411234'

    I checked the production database again and noticed that the curl request did work when obtaining the FROM attribute. It stored the following: params.inspect returned

        "{\"AccountSid\"=>\"AC123\", \"From\"=>\"+19252411234\", \"co..."

    I received a comment stating: "As long as twilio is hitting the same URL with the same method (GET/POST) it should be filling the params array as well." I have no idea how to make this comment actionable. Any help would be greatly appreciated! I'm very new to Rails. Here's my database migration (I have both attributes set to string; I have tried setting them to text and that didn't work either):

        class CreatePosts < ActiveRecord::Migration
          def change
            create_table :posts do |t|
              t.string :body
              t.string :from
              t.timestamps
            end
          end
        end

    Here is my Post model:

        class Post < ActiveRecord::Base
          attr_accessible :body, :from
        end

    Routes (everything appears to be routing just fine):

        MovieApp::Application.routes.draw do
          get "receive_text/index"
          get "pages/home"
          get "send_text/send_text_message"
          root to: 'pages#home'
          match '/receivetext', to: 'receive_text#index'
          match '/pages/home', to: 'pages#home'
          match '/sendtext', to: 'send_text#send_text_message'
        end

    Here's my Gemfile (in case it helps):

        source 'https://rubygems.org'

        gem 'rails', '3.2.3'
        gem 'badfruit'
        gem 'twilio-ruby'
        gem 'logger'
        gem 'jquery-rails'

        group :production do
          gem 'pg'
        end

        group :development, :test do
          gem 'sqlite3'
        end

        group :assets do
          gem 'sass-rails', '~> 3.2.3'
          gem 'coffee-rails', '~> 3.2.1'
          gem 'uglifier', '>= 1.0.3'
        end

  • Wordpress auto excerpts from post content?

    - by Scott B
    I'm trying to create an auto-generated post excerpt from the current page's post content, using a function in my theme's header file. The post excerpt will be used as the page's meta description. Can someone give me an idea of how you might go about this once you've got the post content into a string variable? The somewhat tricky part is that, in order to predict a viable stopping point for the excerpt, I'd like to specify that the cutoff point be the end of the first paragraph of text. For that reason, it does not make sense to load the entire post content into the string I'm using. Can I grab the first paragraph without having to load the entire post content string? I'm not certain how to test for that in PHP. Would regex be the only way?
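
    A common approach is to cut at the first paragraph boundary – the first closing </p> tag or the first blank line – and strip the markup from what remains. Here is a small illustrative sketch of that logic; it is written in C# (the language used elsewhere on this page) rather than PHP, and every name in it is invented, but the regex idea carries over directly to preg_match:

        using System;
        using System.Text.RegularExpressions;

        static class ExcerptHelper
        {
            // Returns the first paragraph of the post content, cleaned up
            // for use as a meta description.
            public static string FirstParagraph(string content)
            {
                // Lazily match everything up to the first </p> tag or blank line.
                Match m = Regex.Match(content, @"^(.*?)(</p>|\r?\n\s*\r?\n)",
                                      RegexOptions.Singleline | RegexOptions.IgnoreCase);
                string para = m.Success ? m.Groups[1].Value : content;

                // Strip leftover tags and collapse whitespace.
                para = Regex.Replace(para, "<[^>]+>", "");
                return Regex.Replace(para, @"\s+", " ").Trim();
            }
        }

    Applied inside the theme, the same cut would run over the post content string (in WordPress, what get_the_content() returns) before echoing it into the meta description tag, so the rest of the post never needs to be processed.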

  • Evolution Of High Definition TV Viewing

    - by Gopinath
    The following guest post is written by Rob, who also blogs on entertainment technology topics at iwantsky.com. Gone are the days when you needed to squint to see the emotions on the faces of Humphrey Bogart and Ingrid Bergman as the lovers bid each other adieu in the classic film Casablanca. These days, watching an ordinary ant painstakingly carry a leaf on Animal Planet can be an exhilarating experience, as you get to see not only the slightest movement but also the demarcation line between the insect’s head, thorax and abdomen. The crystal clear imagery was made possible by the sharp minds and tinkering hands of the scientists who designed the modern world’s HDTV. What is HDTV, and what makes people so agog to have this new innovation in TV watching? HDTV stands for High Definition TV. Television viewing has indeed made a big leap: from the grainy black and whites, TV viewing moved to colored TVs, progressed to SD TVs and now to HDTV. HDTV is the emerging trend in TV viewing, as it delivers bigger and clearer pictures and better audio. Viewers can have a cinema-like TV viewing experience right in the comfort of their own home. With HDTV the viewer is allowed a better viewing range. With Standard Definition (SD) TV, the viewer has to be at a distance that is 3 to 6 times the size of the screen. HDTV allows the viewer to enjoy sharper and clearer images, as it is possible to sit at a distance that is 1.5 to 3 times the size of the screen without noticing any image pixelation. Although HDTV appears to be a fairly new innovation, the system has actually existed in various forms for years. Development of HDTV started in Europe as early as the 1940s. However, NTSC and PAL/SECAM, the two analog TV standards, became dominant and popular worldwide. The analog TV platform was replaced by digital TV in the 1990s. Even during the analog era, attempts were made to develop HDTV: Japan came out with the MUSE system, but due to channel bandwidth requirement concerns, the program was shelved. The entry of four organizations into the HDTV market spurred the development of a beneficial coalition: AT&T, ATRC, MIT and Zenith combined forces, and in 1993 a Grand Alliance was formed. This group is composed of researchers and HDTV manufacturers. A common standard for the HDTV broadcast system was developed, and in 1995 the system was tested and found successful. With the higher screen resolution of HDTV, viewing has never been more enjoyable. [Image courtesy: samsung] This article titled, Evolution Of High Definition TV Viewing, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

  • RUIN: A Post-Apocalyptic Short Animation [Video]

    - by Jason Fitzpatrick
    If your coffee has failed to perk you up this morning, this action-packed post-apocalyptic animation – a trailer for a work-in-progress CGI movie – most certainly will. Courtesy of Oddball Animation, RUIN is a polished bit of animation that could easily stand alone as a short film. The studio is in the process of shopping it around to extend it into a full-length movie which, if it looks as good as it does in the short form, will be worth the price of admission. RUIN [via Neatorama]

  • My First Post @ geekswithblogs

    - by sathya
    Dear Friends, here is my first post on geekswithblogs. I am happy that I have got a separate space here to blog. I am an MCTS certified professional in .NET 2.0 web applications, working as a Senior Software Engineer, and willing to share my knowledge on whatever topics I know. I am also an active presenter / speaker in the Microsoft Developer User Group HyderabadTechies, and I have presented many online sessions there. I keep myself updated on the latest Microsoft technologies. You can see my posts here on the following subjects: C#, ASP.NET, SQL Server, SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS). I have a personal blog too, where I share my knowledge; please take note of it: http://cybersathya.blogspot.com. You will see me here often, posting updates on technologies and the technical challenges I have faced and the solutions for the same. Stay tuned!!! Regards, Sathya Narayanan Srinivasan

  • SQL SERVER – Weekly Series – Memory Lane – #053 – Final Post in Series

    - by Pinal Dave
    It has been a fantastic journey to write the Memory Lane series for an entire year. This series gave me the opportunity to go back and see what I have contributed to this blog throughout the last 7 years. It was indeed a fantastic series, as it provided me the opportunity to witness how technology has grown over the years and how I have progressed in my career while writing this blog. It was also a fantastic experience for readers, as many joined during the last few years and were not sure what they had missed in earlier years. Let us continue with the final episode of the Memory Lane series. Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007

    Get Current User – Get Logged In User
    Here is the straight script which lists logged-in SQL Server users.

    Disable All Triggers on a Database – Disable All Triggers on All Servers
    Question: How to disable all the triggers for a database? Additionally, how to disable all the triggers for all servers? For the answer, execute the script in the blog post.

    Importance of Master Database for SQL Server Startup
    I have received the following questions many times. I will list all the questions here and answer them together. What is the purpose of the Master database? Should we back up the Master database? Which database is the must-have database for SQL Server startup? Which are the default system databases created when SQL Server 2005 is installed for the first time? What happens if the Master database is corrupted? The answers to all of these questions are very much related.

    2008

    DECLARE Multiple Variables in One Statement
    SQL Server is a great product and it has many features which are very unique to SQL Server. Regarding the feature of SQL Server where multiple variables can be declared in one statement, it is absolutely possible to do.

    2009

    How to Enable Index – How to Disable Index – Incorrect syntax near ‘ENABLE’
    Many times I have seen that an index is disabled when there is a large update operation on the table. A bulk insert of a very large file updating a table using SSIS is usually preceded by disabling the index and followed by enabling the index. I have seen many developers running the following query to disable the index.

    2010

    List of all the Views from Database
    I received many emails saying that people have hundreds of views, and now have no clue what is going on, how many of them have indexes and how many do not. Some even asked me if there is any way to get a list of the views with the Index property along with it. Here is the quick script which does exactly that. You can also include many other columns from the same view.

    Minimum Maximum Memory – Server Memory Options
    I was recently reading about SQL Server Memory Options over here. While reading, one line really caught my attention: the minimum value allowed for the max server memory option. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB).

    2011

    Fundamentals of Columnstore Index
    There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests – it stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query searching all the data in an entire row whether the data are relevant or not, column store queries need only search a much smaller number of columns.

    How to Ignore Columnstore Index Usage in Query
    In summary, the question in simple words: “How can we ignore using the columnstore index in selective queries?” Very interesting question – I can understand there may be cases when the columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index; the SQL Server engine will then use whichever other index is best.

    2012

    Storing Variable Values in Temporary Array or Temporary List
    SQL Server does not support arrays or a dynamic-length storage mechanism like a list. Absolutely, there are some clever workarounds and a few extraordinary solutions, but not everybody can come up with such a solution. Additionally, sometimes the requirements are so simple that extraordinary coding is not required. Here is the simple case.

    Move Database Files MDF and LDF to Another Location
    It is not common to keep the database in the same location where the OS is installed. Usually database files are on a SAN, a separate disk array, or SSDs. This is done for performance reasons and from a manageability perspective. The challenge comes up when a database which was installed at a non-preferred default location needs to move to a different location. Here is a quick tutorial on how you can do it.

    UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL
    If your requirement is such that you want the top and bottom queries of the UNION resultset independently sorted but in the same result set, you can add an additional static column and order by that column. Let us re-create the same scenario.

    Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video
    http://www.youtube.com/watch?v=FVWIA-ACMNo

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQLAuthority News – Guest Post – FAULT Contract in WCF with Learning Video

    This is a guest post by one of my very good friends and .NET MVP, Dhananjay Kumar. The very first impression one gets on meeting him is his politeness. He is an extremely nice person, but also has superlative knowledge of .NET and is truly helpful to all of us. Objective: This article will give a [...]
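
    For readers who haven't seen the pattern the article covers, a fault contract is how a WCF service surfaces typed SOAP faults to clients instead of opaque exceptions. A minimal sketch of the idea follows; the service and fault type names are invented for illustration and are not taken from the article:

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Hypothetical fault type; the data members are whatever details
        // the service wants to expose to callers.
        [DataContract]
        public class MathFault
        {
            [DataMember] public string Operation { get; set; }
            [DataMember] public string Details { get; set; }
        }

        [ServiceContract]
        public interface ICalculatorService
        {
            // FaultContract declares that this operation may return a typed SOAP fault.
            [OperationContract]
            [FaultContract(typeof(MathFault))]
            int Divide(int numerator, int denominator);
        }

        public class CalculatorService : ICalculatorService
        {
            public int Divide(int numerator, int denominator)
            {
                if (denominator == 0)
                {
                    // FaultException<T> serializes MathFault into the SOAP fault, so the
                    // client can catch FaultException<MathFault> and inspect the details.
                    var fault = new MathFault { Operation = "Divide", Details = "Denominator cannot be zero." };
                    throw new FaultException<MathFault>(fault, new FaultReason("Math error"));
                }
                return numerator / denominator;
            }
        }

    On the client side, the generated proxy then throws FaultException<MathFault>, which can be caught explicitly rather than surfacing as a generic communication error.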

  • The Silverlight Group's First Blog Post

    - by TheSilverlightGroup
    Welcome to The Silverlight Group's first blog post! First of all, we just want to introduce ourselves. The Silverlight Group is a new Microsoft vendor company whose primary officers are David Silverlight himself as Chief Software Architect & Kim Schmidt as "Connection String", a form of CEO. So, for a simple introduction, there you have it. We will be updating this blog on a regular basis, so please visit us often & share your thoughts with us, as we will with you. Thanks for visiting us while we get set up!
