Search Results

Search found 27946 results on 1118 pages for 'output buffer empty'.


  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
    There are two kinds of storage in a database: row store and column store. A row store does exactly what the name suggests – it stores rows of data on a page – while a column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query scanning all the data in an entire row whether it is relevant or not, a column store query needs to read only a much smaller number of columns. This means major gains in search speed and hard drive usage. Additionally, columnstore indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. Let us understand in this article how the columnstore type of index differs.

    Column store indexes are powered by Microsoft's VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don't have to learn new syntax to create one. You just need to specify the keyword "COLUMNSTORE" and enter the rest of the statement as you normally would. Keep in mind that once you add a column store index to a table, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will mainly be used for data warehousing, this should not be a big problem. You can always use partitioning to avoid rebuilding the index.

    A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data has traditionally been stored. The difference between the column store and row store approaches is illustrated below: in the case of row store indexes, pages contain rows, with the columns of each row spanning the page; in the case of column store indexes, each page contains values from a single column. As a result, only the columns needed to answer a query will be fetched from disk. Additionally, there is a good chance that there will be redundant data within a single column, which further helps to compress the data; this has a positive effect on the buffer hit rate, since most of the data will stay in memory and will not need to be retrieved again.

    Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a data set which is large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on their resources.
    USE AdventureWorks
    GO
    -- Create New Table
    CREATE TABLE [dbo].[MySalesOrderDetail](
    [SalesOrderID] [int] NOT NULL,
    [SalesOrderDetailID] [int] NOT NULL,
    [CarrierTrackingNumber] [nvarchar](25) NULL,
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [UnitPrice] [money] NOT NULL,
    [UnitPriceDiscount] [money] NOT NULL,
    [LineTotal] [numeric](38, 6) NOT NULL,
    [rowguid] [uniqueidentifier] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO
    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
    ( [SalesOrderDetailID])
    GO
    -- Create Sample Data
    -- WARNING: This query may run up to 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.*
    FROM Sales.SalesOrderDetail S1
    GO 100

    Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, I will first run the query which uses the regular index and note its IO usage. After that, we will create the columnstore index and measure the IO of the same query.

    -- Performance Test
    -- Comparing Regular Index with ColumnStore Index
    USE AdventureWorks
    GO
    SET STATISTICS IO ON
    GO
    -- Select Table with regular Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO
    -- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
    -- Create ColumnStore Index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
    ON [MySalesOrderDetail]
    (UnitPrice, OrderQty, ProductID)
    GO
    -- Select Table with Columnstore Index
    SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
    SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
    FROM [dbo].[MySalesOrderDetail]
    GROUP BY ProductID
    ORDER BY ProductID
    GO

    It is very clear from the results that the query performs extremely fast after creating the columnstore index. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored together on the same pages, and the query does not have to go through every single page to read them. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

    -- Cleanup
    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO

    In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for the columnstore index.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • slow query logging in mysql server

    - by Vinayak Mahadevan
    Hi, I have installed MySQL Community Edition 5.1.41 on a Windows 2000 server. In my.ini I have enabled slow query logging and have redirected the output to a table. I have set long_query_time to 10 seconds. Then, after running some queries, I checked the slow query log table and found that all the queries which were executed had been logged, not just the slow ones, and a file called database-slow.log had also been created in the data folder. Can anybody please tell me where I am going wrong? I am using the built-in InnoDB and have not activated the InnoDB plugin. Thanks
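    For reference, a minimal sketch of the my.ini settings this setup would normally use (variable names are from the MySQL 5.1 manual; the values are assumptions based on the description above). One setting worth checking: if log_queries_not_using_indexes is enabled, statements that use no index are logged regardless of long_query_time, which would explain every query appearing in the log.

    [mysqld]
    slow_query_log = 1
    log_output = TABLE
    long_query_time = 10
    # If this line is present, fast queries without indexes are logged too.
    # Try commenting it out:
    # log_queries_not_using_indexes = 1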

    Read the article

  • Accessing Server-Side Data from Client Script: Accessing JSON Data From an ASP.NET Page Using jQuery

    When building a web application, we must decide how and when the browser will communicate with the web server. The ASP.NET WebForms model greatly simplifies web development by providing a straightforward mechanism for exchanging data between the browser and the server. With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. On postback, the server sends the entire contents of the web page back to the browser, which then displays this new content. With WebForms we don't need to spend much time or effort thinking about how or when the browser will communicate with the server or how that returned information will be processed by the browser. It just works. While this approach certainly works and has its advantages, it's not without its drawbacks. The primary concern with postback forms is that they require a large amount of information to be exchanged between the browser and the server. Specifically, the browser sends back all of its form fields (including hidden ones, like view state, which may be quite large) and then the server sends back the entire contents of the web page. Granted, there are scenarios where this large quantity of data needs to be exchanged, but in many cases we can use techniques that exchange much less information. However, these techniques necessitate spending more time and effort thinking about how and when to have the browser communicate with the server and intelligently deciding on what information needs to be exchanged. This article, the first in a multi-part series, examines different techniques for accessing server-side data from a browser using client-side script. Throughout this series we will explore alternative ways to expose data on the server so that it can be accessed from the browser using script; we will also examine various tools for communicating with the server from JavaScript, including jQuery and the ASP.NET AJAX library. Read on to learn more! Read More >
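    As a taste of the lighter-weight approach the series explores, here is a minimal sketch of a jQuery call that fetches JSON from a server-side endpoint without a postback (the URL, parameter, and returned fields are hypothetical placeholders, not part of the article):

    // Request just the data we need instead of reposting the entire form.
    $.getJSON('GetProducts.ashx', { category: 'beverages' }, function (products) {
        // The endpoint returns a JSON array; render it without reloading the page.
        $.each(products, function (index, product) {
            $('#productList').append('<li>' + product.Name + '</li>');
        });
    });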

    Read the article

  • Resizing an iscsi LUN on an EMC under Linux RHEL 5

    - by niXar
    I expanded a LUN on my SAN (from 30G to 50G), and tried all kinds of voodoo, but it looks like the system still believes the device is small, despite reporting the new size:

    # blockdev --getsize64 /dev/sdl
    53687091200
    # dd if=/dev/sdl skip=$(( 31*1024*1024*1024/512 )) count=1 | hexdump -C
    dd: reading `/dev/sdl': Input/output error

    The dd line works for blocks below the 30G original size. If I reboot, the problem goes away, but I don't want to, since the machine has a dozen VMs running. I tried various magic, such as:

    iscsiadm -m node -R -I iface0

    ... to no avail. Note: I'm not using multipath.
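    For reference, the usual way to ask the SCSI layer to re-read a resized device without rebooting is the per-device sysfs rescan. A sketch, assuming the RHEL 5 kernel in use supports it and that /dev/sdl is still the correct device:

    # Ask the SCSI midlayer to re-read the capacity of this one device
    echo 1 > /sys/block/sdl/device/rescan
    # Then verify what the kernel now believes
    blockdev --getsize64 /dev/sdl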

    Read the article

  • Nexenta/OpenSolaris filer kernel panic/crash

    - by ewwhite
    I have an x4540 Sun storage server running NexentaStor Enterprise. It's serving NFS over 10GbE CX4 for several VMware vSphere hosts, with 30 virtual machines running. For the past few weeks, I've had random crashes spaced 10-14 days apart. This system used to run OpenSolaris and was stable in that arrangement. The crashes trigger the automated system recovery feature on the hardware, forcing a hard system reset. Here's the output from the mdb debugger:

    panic[cpu5]/thread=ffffff003fefbc60: Deadlock: cycle in blocking chain

    ffffff003fefb570 genunix:turnstile_block+795 ()
    ffffff003fefb5d0 unix:mutex_vector_enter+261 ()
    ffffff003fefb630 zfs:dbuf_find+5d ()
    ffffff003fefb6c0 zfs:dbuf_hold_impl+59 ()
    ffffff003fefb700 zfs:dbuf_hold+2e ()
    ffffff003fefb780 zfs:dmu_buf_hold+8e ()
    ffffff003fefb820 zfs:zap_lockdir+6d ()
    ffffff003fefb8b0 zfs:zap_update+5b ()
    ffffff003fefb930 zfs:zap_increment+9b ()
    ffffff003fefb9b0 zfs:zap_increment_int+68 ()
    ffffff003fefba10 zfs:do_userquota_update+8a ()
    ffffff003fefba70 zfs:dmu_objset_do_userquota_updates+de ()
    ffffff003fefbaf0 zfs:dsl_pool_sync+112 ()
    ffffff003fefbba0 zfs:spa_sync+37b ()
    ffffff003fefbc40 zfs:txg_sync_thread+247 ()
    ffffff003fefbc50 unix:thread_start+8 ()

    Any ideas what this means?

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point-and-click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop-down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed, and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens at this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop-down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop-down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then, for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications: a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 23:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window, in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year up to 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the FinalSalary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or, in Europe, the pound or euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" text box. These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table, and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator, and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop-down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the "Mode," and the "Truncate" and "Create Missing Table" options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop-down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop-down menu.

    The wizard first displays the table description, and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine-tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point-and-click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Gain Quick Access to the Cache in Firefox

    - by Asian Angel
    Are you looking for a quick and simple way to view the contents of the cache in Firefox? Then you will definitely want to see how easy it can be using the CacheViewer extension.

    Note: CacheViewer is a front-end app for easily accessing and searching the memory cache.

    Before: Viewing the cache in Firefox using "about:cache" provides some information about the contents, but may not be the most efficient method available for some people.

    CacheViewer in Action: Once you have installed the extension, there are three easy ways to access your new cache viewer. The first is the "CacheViewer Command" available in the "Tools Menu", and the second is the keyboard shortcut "Ctrl + Shift + C". The third is adding a "Toolbar Button" to your browser's UI. All three work equally well… choose the method that best suits your personal needs.

    When you access the "CacheViewer Window", this is what it will look like. You may decide to resize it and move (or hide) some of the columns for the best viewing. You can easily scroll through the cache contents and preview images if desired, as shown here. If you keep the "CacheViewer Window" open, you can refresh it as you browse using the "Refresh Button" in the lower right corner. This is a nice, quick, and very simple way to access the cache on demand and save items to your hard drive if desired.

    Note: The "CacheViewer" can also be set to open in a new tab instead (see "Options").

    Options: Choose whether "CacheViewer" opens in a separate window (default) or in a new tab.

    Conclusion: If you want a quick and simple way to view the cache in Firefox, then the CacheViewer extension is just what you have been looking for.

    Link: Download the CacheViewer extension (Mozilla Add-ons)

    Read the article

  • How to get headphones and speakers working at the same time?

    - by Borek
    I have a Gigabyte P55-UD3 motherboard which comes with a Realtek sound card and audio header for the front panel. My headset is permanently connected to the front panel (mic in, headphones in) and I also have speakers connected at the rear of the computer. What I want to achieve is that both the headphones and the speakers work at the same time. On my old PC with Realtek, I was able to do this by clicking "Device advanced settings" in the Realtek HD Audio Manager and then clicking "Make front and rear output devices playback two different audio streams simultaneously" and then doing some reassigning of front/rear settings in the Realtek Audio Manager (can't remember the details). But that doesn't seem to be working now - either the speakers play sounds or the headphones, not both at the same time. To pose the question differently, Windows plays the sounds on the "default device". I'd like to make both headphones and the speakers kind of like the default devices.

    Read the article

  • VoteCounts: bookmarklet to display up/down votes even for rep<1k

    - by SztupY
    Screenshot / Code Snippet

    About: This small bookmarklet lets anyone use the "vulnerability" of the API that allows you to check the up/down vote counts – a feat you could normally achieve only as a 1k+ rep user. It is mainly useful on sites where you don't have that amount of rep but want to check the stats of the more controversial questions (usually on meta). No API key is actually used here, but it's trivial to add one.

    License: I don't think code like this deserves anything other than the WTFPL.

    Download: It's the following line (JavaScript – 375 bytes):

    javascript:(function(){a='jsonp';c=' .vote-count-post';d='up_vote_count';e='down_vote_count';$.ajax({url:document.location.href.replace(/(http:\/\/)(.*)(\/questions\/.*)\/.*/,'$1api.$2/1.0$3'),dataType:a,jsonp:a,success:function(x){b=x.questions[0];$('#question'+c).html(b[d]+"-"+b[e]);$.each(b.answers,function(z,y){$('#answer-'+y.answer_id+c).html(y[d]+"-"+y[e])})}})})()

    EDIT: This one is longer, but it makes the result look exactly as it does on SO. It took a while to make it exactly 508 chars, so that it works with IE too.

    javascript:(function(){w=function(t,q){l='_vote_count';h='up'+l;j='down'+l;k='</div>';s='<div style="color:';$(t).html(s+'green">'+(q[h]?'+':'')+q[h]+k+'<div class="vote-count-separator">'+k+s+'maroon">'+(q[j]==0?'':'-')+q[j]+k)};a='jsonp';c=' .vote-count-post';$.ajax({url:document.location.href.replace(/(http:\/\/)(.*)(\/questions\/.*)\/.*/,'$1api.$2/1.0$3'),dataType:a,jsonp:a,success:function(x){b=x.questions[0];w('#question'+c,b);$.each(b.answers,function(z,y){w('#answer-'+y.answer_id+c,y)})}})})()

    Platform: Any jQuery/bookmarklet-compatible browser. Tested with Chrome, FF 3.6 and IE8 on SU, SO and MSO.

    Contact: sztupy.hu

    Code: It was written in Notepad, already in minified form; I used Firebug to debug. The code is above. Contribute (= decrease code size or make the output nicer) any way you want. It'd be great if you could get the second version under 508 bytes.

    Known bugs: If a question has more than 30 answers, some of the answers won't be resolved. This can be solved easily for <=100 answers, but for questions with more than 100 answers it is more difficult.

    EDIT: updated to API version 1.0. Answers don't work yet.

    Read the article

  • OpenVPN on ec2 bridged mode connects but no Ping, DNS or forwarding

    - by michael
    I am trying to use OpenVPN to access the internet over a secure connection. I have OpenVPN configured and running on Amazon EC2 in bridge mode with client certs. I can successfully connect from the client, but I cannot get access to the internet or ping anything from the client.

    I checked the following, and everything seems to show a successful connection between the VPN client and server, with UDP traffic on 1194.

    [server] sudo tcpdump -i eth0 udp port 1194 (shows UDP traffic after establishing connection)

    [server] sudo iptables -L

    Chain INPUT (policy ACCEPT)
    target prot opt source destination
    ACCEPT all -- anywhere anywhere
    ACCEPT all -- anywhere anywhere
    Chain FORWARD (policy ACCEPT)
    target prot opt source destination
    ACCEPT all -- anywhere anywhere
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination

    [server] sudo iptables -L -t nat

    Chain PREROUTING (policy ACCEPT)
    target prot opt source destination
    Chain POSTROUTING (policy ACCEPT)
    target prot opt source destination
    MASQUERADE all -- ip-W-X-Y-0.us-west-1.compute.internal/24 anywhere
    Chain OUTPUT (policy ACCEPT)
    target prot opt source destination

    [server] openvpn.log

    Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 [localhost] Inactivity timeout (--ping-restart), restarting
    Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 SIGUSR1[soft,ping-restart] received, client-instance restarting
    Wed Oct 19 03:41:31 2011 MULTI: multi_create_instance called
    Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Re-using SSL/TLS context
    Wed Oct 19 03:41:31 2011 a.b.c.d:57889 LZO compression initialized
    Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Control Channel MTU parms [ L:1574 D:166 EF:66 EB:0 ET:0 EL:0 ]
    Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ]
    Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Local Options hash (VER=V4): '360696c5'
    Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Expected Remote Options hash (VER=V4): '13a273ba'
    Wed Oct 19 03:41:31 2011 a.b.c.d:57889 TLS: Initial packet from [AF_INET]a.b.c.d:57889, sid=dd886604 ab6ebb38
    Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=EXAMPLE_CA/[email protected]
    Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=localhost/[email protected]
    Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
    Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
    Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
    Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
    Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
    Wed Oct 19 03:41:37 2011 a.b.c.d:57889 [localhost] Peer Connection Initiated with [AF_INET]a.b.c.d:57889
    Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 PUSH: Received control message: 'PUSH_REQUEST'
    Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 SENT CONTROL [localhost]: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route-gateway W.X.Y.Z,ping 10,ping-restart 120,ifconfig W.X.Y.Z 255.255.255.0' (status=1)
    Wed Oct 19 03:41:40 2011 localhost/a.b.c.d:57889 MULTI: Learn: (IPV6) -> localhost/a.b.c.d:57889

    [client] tracert google.com

    Tracing route to google.com [74.125.71.104] over a maximum of 30 hops:
    1 347 ms 349 ms 348 ms PC [W.X.Y.Z]
    2 * * * Request timed out.

    I can also successfully ping the server IP address from the client, and ping google.com from an SSH shell on the server. What am I doing wrong? Here is my config (note: W.X.Y.Z == Amazon EC2 private IP address).

    Bridge config on br0:

    ifconfig eth0 0.0.0.0 promisc up
    brctl addbr br0
    brctl addif br0 eth0
    ifconfig br0 W.X.Y.X netmask 255.255.255.0 broadcast W.X.Y.255 up
    route add default gw W.X.Y.1 br0

    /etc/openvpn/server.conf (from https://help.ubuntu.com/10.04/serverguide/C/openvpn.html):

    local W.X.Y.Z
    dev tap0
    up "/etc/openvpn/up.sh br0"
    down "/etc/openvpn/down.sh br0"
    ;server W.X.Y.0 255.255.255.0
    server-bridge W.X.Y.Z 255.255.255.0 W.X.Y.105 W.X.Y.200
    ;push "route W.X.Y.0 255.255.255.0"
    push "redirect-gateway def1 bypass-dhcp"
    push "dhcp-option DNS 208.67.222.222"
    push "dhcp-option DNS 208.67.220.220"
    tls-auth ta.key 0 # This file is secret
    user nobody
    group nogroup
    log-append openvpn.log

    iptables config:

    sudo iptables -A INPUT -i tap0 -j ACCEPT
    sudo iptables -A INPUT -i br0 -j ACCEPT
    sudo iptables -A FORWARD -i br0 -j ACCEPT
    sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o eth0 -j MASQUERADE
    echo 1 > /proc/sys/net/ipv4/ip_forward

    Routing tables:

    route -n
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    W.X.Y.0 0.0.0.0 255.255.255.0 U 0 0 0 br0
    0.0.0.0 W.X.Y.1 0.0.0.0 UG 0 0 0 br0

    C:\>route print
    ===========================================================================
    Interface List
    32...00 ff ac d6 f7 04 ......TAP-Win32 Adapter V9
    15...00 14 d1 e9 57 49 ......Microsoft Virtual WiFi Miniport Adapter #2
    14...00 14 d1 e9 57 49 ......Realtek RTL8191SU Wireless LAN 802.11n USB 2.0 Network Adapter
    10...00 1f d0 50 1b ca ......Realtek PCIe GBE Family Controller
    1...........................Software Loopback Interface 1
    11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
    16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter
    17...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
    18...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3
    36...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5
    ===========================================================================

    IPv4 Route Table
    ===========================================================================
    Active Routes:
    Network Destination Netmask Gateway Interface Metric
    0.0.0.0 0.0.0.0 10.1.2.1 10.1.2.201 25
    10.1.2.0 255.255.255.0 On-link 10.1.2.201 281
    10.1.2.201 255.255.255.255 On-link 10.1.2.201 281
    10.1.2.255 255.255.255.255 On-link 10.1.2.201 281
    127.0.0.0 255.0.0.0 On-link 127.0.0.1 306
    127.0.0.1 255.255.255.255 On-link 127.0.0.1 306
    127.255.255.255 255.255.255.255 On-link 127.0.0.1 306
    224.0.0.0 240.0.0.0 On-link 127.0.0.1 306
    224.0.0.0 240.0.0.0 On-link 10.1.2.201 281
    255.255.255.255 255.255.255.255 On-link 127.0.0.1 306
    255.255.255.255 255.255.255.255 On-link 10.1.2.201 281
    ===========================================================================
    Persistent Routes:
    Network Address Netmask Gateway Address Metric
    0.0.0.0 0.0.0.0 10.1.2.1 Default
    ===========================================================================

    C:\>tracert google.com
    Tracing route to google.com [74.125.71.147] over a maximum of 30 hops:
    1 344 ms 345 ms 343 ms PC [W.X.Y.221]
    2 * * * Request timed out.

    Read the article

  • Unity DontDestroyOnLoad causing scenes to stay open

    - by jkrebsbach
    Originally posted on: http://geekswithblogs.net/jkrebsbach/archive/2014/08/11/unity-dontdestroyonload-causing-scenes-to-stay-open.aspx

    My Unity project has a class (ClientSettings) where most of the game state & management properties are stored. Among these are some utility functions that derive from MonoBehaviour. However, between every scene this object was getting recreated and I was losing all sorts of useful data. I learned that with DontDestroyOnLoad I can persist this entity between scenes. Super.

    Persisting information between scenes

    The problem with adding DontDestroyOnLoad to my "ClientSettings" was that suddenly my previous scene would stay alive and continue to execute its update routines. An important part of the documentation helps shed light on my issues:

    "If the object is a component or game object then its entire transform hierarchy will not be destroyed either."

    My ClientSettings script was attached to the main camera in my first scene. Because of this, the Main Camera was part of the hierarchy of the component, and therefore was also not destroyed when switching scenes. Now the first scene's main camera Update routine continues to execute after the second scene is running, causing some very nasty bugs.

    Suddenly I wasn't sure how I should be creating a persistent entity, so I created a new sandbox project and tested different approaches until I found one that works.

    In the main scene: create an empty game object, "GameManager", and attach the ClientSettings script to it, setting any properties of the ClientSettings script as appropriate. Create a prefab from the GameManager, then remove the game object from the main scene.

    On the Main Camera, I created a script, MainScript. This is my primary script for the main scene.

    public GameObject[] prefabs;
    private ClientSettings _clientSettings;

    // Use this for initialization
    void Start () {
        GameObject res = (GameObject)Instantiate(prefabs[0]);
    }

    Now go back to the scene view and add the new GameManager prefab to the prefabs collection of MainScript. When the main scene loads, the GameManager is set up, but it is not part of the main scene's hierarchy, so the two are no longer tied together.

    Now in our second scene we have a script, SecondScript, and we can get a reference to the ClientSettings we created in the previous scene like so:

    private ClientSettings _clientSettings;

    // Use this for initialization
    void Start () {
        _clientSettings = FindObjectOfType<ClientSettings> ();
    }

    And the scenes can start and finish without creating strange long-running scene side effects.
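    For anyone following along, here is a minimal sketch of the pattern the post converges on, a persistence call plus a duplicate guard, assuming the ClientSettings class described above (the static guard field is an addition for illustration, not from the original post):

    using UnityEngine;

    public class ClientSettings : MonoBehaviour
    {
        private static ClientSettings _instance;

        void Awake()
        {
            // If a ClientSettings already survived from an earlier scene, drop this duplicate.
            if (_instance != null && _instance != this)
            {
                Destroy(gameObject);
                return;
            }
            _instance = this;
            // Keeps this object alive across scene loads. Because it sits at the
            // root of its own hierarchy, nothing else is dragged along with it.
            DontDestroyOnLoad(gameObject);
        }
    }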

    Read the article

  • SQL Developer: Why Do You Require Semicolons When Executing SQL in the Worksheet?

    - by thatjeffsmith
    There are many database tools out there that support Oracle Database. Oracle SQL Developer just happens to be the one that is produced and shipped by the same folks that bring you the database product. Several other 3rd party tools out there allow you to have a collection of SQL statements in their editor and execute them without requiring a statement delimiter (usually a semicolon). Let's look at a quick example:

    select * from scott.emp

    select * from hr.employees

    delete from HR_COPY.BEER where HR_COPY.BEER.STATE like '%West Virginia%'

    In some tools, you can simply place your cursor on, say, the 2nd statement and ask to execute that statement. The tool assumes that the blank line between it and the next statement, a DELETE, serves as a statement delimiter. This is not bad in and of itself. However, it is very important to understand how your tools work. If you were to try the same trick by running the delete statement, it would empty my entire BEER table instead of just trimming out the breweries from my home state.

    SQL Developer only executes what you ask it to execute. You can paste this same code into SQL Developer and run it without problems and without having to add semicolons to your statements. Highlight what you want executed, and hit Ctrl-Enter. If you don't highlight the text, here's what you'll see: the statement at the cursor vs what SQL Developer actually executed. The parser looks for a query and keeps going until the statement is terminated with a semicolon – UNLESS text is highlighted, in which case it assumes you only want to execute what is highlighted. In both cases you are being explicit about what is sent to the database.

    Again, there's not necessarily a 'right' or 'wrong' debate here. What you need to be aware of is the differences, and to learn new workflows if you are moving from other database tools to Oracle SQL Developer. I say, when in doubt, back away from the tool, especially if you're in production.

    Oh, and to answer the original question… because we're trying to emulate SQL*Plus behavior. You end statements in SQL*Plus with delimiters, and the default delimiter is a semicolon.

    Read the article

  • AS3: StageWidth for BOX2D?

    - by Gabriel Meono
    I know Box2D uses meters and AS3 uses pixels. I'm trying to create objects which are limited to the stageWidth. If I use this loop:

    for (var i:int = 0; i<(stage.stageWidth); i++){...}

    the animation freezes, and this output appears:

    TypeError: Error #1009: Cannot access a property or method of a null object reference.
    at Box2D.Collision::b2BroadPhase/CreateProxy()
    at Box2D.Collision.Shapes::b2Shape/CreateProxy()
    at Box2D.Dynamics::b2Body/CreateShape()
    at com.actionsnippet.qbox.objects::CircleObject/build()
    at com.actionsnippet.qbox::QuickObject/init()
    at com.actionsnippet.qbox::QuickObject()
    at com.actionsnippet.qbox.objects::CircleObject()
    at com.actionsnippet.qbox::QuickBox2D/create()
    at com.actionsnippet.qbox::QuickBox2D/addCircle()
    at BOX2D_Test_Tutorial_fla::MainTimeline/frame1()

    Does anyone know how to fix this? Full code:

    [SWF(width = 350, height = 600, frameRate = 60)]
    import com.actionsnippet.qbox.*;

    var sim:QuickBox2D = new QuickBox2D(this);
    sim.createStageWalls();

    // make a heavy circle
    sim.addCircle({x:3, y:3, radius:0.4, density:1});

    // create a few platforms
    // make pins
    for (var i:int = 0; i<(stage.stageWidth); i++){
        //End
        sim.addCircle({x:1 + i * 1.5, y:18, radius:0.1, density:0});
        sim.addCircle({x:2 + i * 1.5, y:17, radius:0.1, density:0});
        sim.addCircle({x:1 + i * 1.5, y:16, radius:0.1, density:0});
        sim.addCircle({x:2 + i * 1.5, y:15, radius:0.1, density:0});
        //Mid end
        sim.addCircle({x:0 + i * 2, y:14, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:13, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:12, radius:0.1, density:0});
        sim.addCircle({x:0 + i * 2, y:11, radius:0.1, density:0});
    }

    sim.start();
    sim.mouseDrag();
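    A note on the loop bound: stage.stageWidth is 350 here, so the loop places pins out to roughly x = 525 in Box2D's metre-based coordinates, far outside the simulated world, which is a common trigger for that CreateProxy null-reference error. A sketch of looping in world units instead, assuming the conventional 30-pixels-per-metre scale (an assumption; check what your QuickBox2D setup actually uses):

    // Convert the stage width from pixels to world metres before looping.
    var pixelsPerMeter:Number = 30; // assumed default scale; adjust if yours differs
    var stageWidthInMeters:Number = stage.stageWidth / pixelsPerMeter;
    for (var i:int = 0; i < stageWidthInMeters; i++){
        sim.addCircle({x:1 + i * 1.5, y:18, radius:0.1, density:0});
        // ... remaining pins as in the original loop
    }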

    Read the article

  • Problem with SLATEC routine usage with gfortran

    - by user39461
    I am trying to compute the Bessel function of the second kind (Bessel_y) using SLATEC's Amos library, available on Netlib. Below I have pasted my test program, which calls the SLATEC routine CBESY.

    PROGRAM BESSELTEST
      IMPLICIT NONE
      REAL :: FNU
      INTEGER, PARAMETER :: N = 2, KODE = 1
      COMPLEX, ALLOCATABLE :: CWRK(:), CY(:)
      COMPLEX :: Z, ci
      INTEGER :: NZ, IERR
      ALLOCATE(CWRK(N), CY(N))
      ci = cmplx(0.0, 1.0)
      FNU = 0.0e0
      Z = CMPLX(0.3e0, 0.4e0)
      CALL CBESY(Z, FNU, KODE, N, CY, NZ, CWRK, IERR)
      WRITE(*,*) 'CY: ', CY
      WRITE(*,*) 'IERR: ', IERR
      STOP
    END PROGRAM

    And here is the output of the above program:

    CY: ( 5.78591091E-39, 5.80327020E-39) ( 0.0000000 , 0.0000000 )
    IERR: 4

    IERR = 4 means there is some problem with the input itself. To be precise, IERR = 4 means the following, as per the header info in CBESY.f:

    ! IERR=4, CABS(Z) OR FNU+N-1 TOO LARGE - NO COMPUTA-
    ! TION BECAUSE OF COMPLETE LOSSES OF SIGNIFI-
    ! CANCE BY ARGUMENT REDUCTION

    Clearly, CABS(Z) (which is 0.50) and FNU + N - 1 (which is 1.0) are not too large, but the routine CBESY still throws error number 4 as above. The CY array should have the following values for the given argument:

    CY(1) = -0.4983 + 0.6700i
    CY(2) = -1.0149 + 0.9485i

    These values were computed using Matlab. I can't figure out what the problem is when I call CBESY from the SLATEC library. Any clues? Many thanks for the suggestions/help.

    PS: if it is of any help, I used gfortran to compile, link and then create the SLATEC library file (the .a file), which I keep in the same directory as my test program. Shell commands to build and run the code:

    gfortran -c BesselTest.f95
    gfortran -o a *.o libslatec.a
    a

    GD.
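    One thing worth ruling out (an educated guess, not a confirmed diagnosis): the Amos routines derive their overflow and significance thresholds from SLATEC's machine-constant functions R1MACH/I1MACH, and the stock Netlib versions of those must be configured for your machine. If your build's R1MACH returns garbage under gfortran, CBESY can report IERR=4 even for tiny arguments like this one. A sketch of the intrinsic-based R1MACH often substituted in modern builds:

    ! Drop-in replacement for SLATEC's R1MACH using Fortran 90 intrinsics.
    REAL FUNCTION R1MACH (I)
      INTEGER, INTENT(IN) :: I
      SELECT CASE (I)
        CASE (1); R1MACH = TINY(1.0)                  ! smallest positive magnitude
        CASE (2); R1MACH = HUGE(1.0)                  ! largest magnitude
        CASE (3); R1MACH = EPSILON(1.0) / RADIX(1.0)  ! smallest relative spacing
        CASE (4); R1MACH = EPSILON(1.0)               ! largest relative spacing
        CASE (5); R1MACH = LOG10(REAL(RADIX(1.0)))    ! log10 of the base
        CASE DEFAULT; R1MACH = 0.0                    ! invalid argument
      END SELECT
    END FUNCTION R1MACH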

    Read the article

  • Are there any font rendering libraries for games development that support hinting?

    - by Richard Fabian
    I've used AngelCode's bitmap font generator quite a bit, and though it's very good, I wondered whether there would be a way of using the hinting information to produce a more readable result, by using hinting to provide differing thickness based on size/pixel coverage. I imagine any solution would have to use the distance-field technique presented in the Valve paper on smoothing fonts while maintaining or reducing asset size (http://www.gamedev.net/community/forums/topic.asp?topic_id=494612), but I haven't found any demos of it being used with hinting information turned on or included in the field gradients in any way.

    Another way of looking at this is whether there are any font bitmap generators that will output mipmaps that still maintain their readability in the face of shrinking pixel size. I think the lower mip levels would try to guarantee fill and space where necessary to maintain readability/topology, over maintaining style/form (the point of hinting).

    In response to "Is there a reason you can't just render the size you want": the problem lies in the fact that font rasterisers currently don't render in 3D, and hinting information would matter by different amounts along different axes, since pixel density differs along them, even varying in importance along the length of a string as its size reduces over distance. For example, I only want horizontal hinting in a texture that is viewed from the side, and only really want vertical hinting in a font that is viewed from below or above.

    This isn't meant to be a renderer that tries to render a perfect outline as accurately as possible, since hinting distorts the reality of the font. Instead it is meant to be a rendering solution for fairly static scenes, but scenes that have 3D-transformed and warped text layout. In this case legibility is important, more important than the accuracy of the polygon shape.

    Read the article

  • Toshiba Satellite laptop connected to HDTV

    - by VANJ
    I have a Toshiba Satellite A505-S6014 laptop running Windows 7 64-bit, connected to a Toshiba Regza 42" HDTV with an HDMI cable. The laptop has a 16" display and the screen resolution is set to the maximum/recommended 1366x768. The display output is set to "LCD+HDMI". The display looks fine on the laptop screen, but on the TV it is not a full-screen picture: it leaves a good 3" black border around all 4 edges. When I switch the display to "HDMI only", it becomes too big for the TV screen, and some of my desktop icons are no longer visible off to the side. What is the best way to set this up? I guess that since the 16" and 42" displays have different native resolutions, the LCD+HDMI mode is defaulting to the optimal size for the 16" panel. But when I set it to HDMI only, what is the appropriate resolution for a 42" full-screen display? Thanks

    Read the article

  • hp pavilion g6 1250 with a BCM 4313 doesn't see any wireless networks

    - by Ahmed Kotb
    I have tried using Ubuntu 10.04 and Ubuntu 11.10, and both have the same problem: the driver is detected by the Additional Drivers (proprietary drivers) wizard, but after installation Ubuntu can't see any wireless network except one, which is not mine (and I can't connect to it, as it is secured). There are plenty of wireless networks around me, but Ubuntu can't detect them, and if I try to connect to one of them as if it were a hidden network, the connection times out.

    The command lspci -nvn | grep -i net gives:

    04:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01)
    05:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 05)

    iwconfig gives:

    lo no wireless extensions.
    eth0 no wireless extensions.
    wlan0 IEEE 802.11bgn ESSID:off/any
    Mode:Managed Access Point: Not-Associated Tx-Power=19 dBm
    Retry long limit:7 RTS thr:off Fragment thr:off
    Power Management:off

    I guess it is something related to the Broadcom driver, but I don't know. Any help will be appreciated.

    UPDATE: OK, I installed a fresh copy of 11.10 to remove the effect of any earlier trials, and I followed the link (http://askubuntu.com/q/67806) as suggested. All I have done so far is run the command lsmod | grep brc, which gave the following:

    brcmsmac 631693 0
    brcmutil 17837 1 brcmsmac
    mac80211 310872 1 brcmsmac
    cfg80211 199587 2 brcmsmac,mac80211
    crc_ccitt 12667 1 brcmsmac

    Then I blacklisted all the other drivers as mentioned in the link. The wireless is still disabled. In the previous installation, installing the Broadcom STA driver from Additional Drivers enabled the network menu, but as I have said before, it wasn't able to connect or even get a list of available networks. So what should I do now? The output of the command rfkill list all is:

    0: phy0: Wireless LAN
    Soft blocked: no
    Hard blocked: no
    Read the article

  • Windows 8 recimg createimage fails with 0x80070002, why?

    - by ezuk
    Running Windows 8 x64. C: is an SSD with lots of free space (200GB). I run:

    recimg /createimage C:\CustomRefreshImages\Img121105

    The output says:

    Source OS location: C:
    Recovery image path: C:\CustomRefreshImages\Img121105\CustomRefresh.wim
    Creating recovery image. Press [ESC] to cancel.
    Initializing
    The recovery image cannot be written. Error Code - 0x80070002

    The paths specified in the command line already exist, and I'm running this in an elevated command prompt, so it's not a permissions issue. I've googled this but could only find worthless results, including a particularly entertaining one where the "solution" included instructions for creating a screenshot (press PrtScn). Any help would be most appreciated.

    Read the article

  • The remote host closed the connection. The error code is 0x80070057

    - by Jalpesh P. Vadgama
    While creating a PDF (or any file) from ASP.NET pages, I was getting the following error:

    Exception Type: System.Web.HttpException
    The remote host closed the connection. The error code is 0x80072746.
    at System.Web.Hosting.ISAPIWorkerRequestInProcForIIS6.FlushCore(Byte[] status, Byte[] header, Int32 keepConnected, Int32 totalBodySize, Int32 numBodyFragments, IntPtr[] bodyFragments, Int32[] bodyFragmentLengths, Int32 doneWithSession, Int32 finalStatus, Boolean& async)
    at System.Web.Hosting.ISAPIWorkerRequest.FlushCachedResponse(Boolean isFinal)
    at System.Web.Hosting.ISAPIWorkerRequest.FlushResponse(Boolean finalFlush)
    at System.Web.HttpResponse.Flush(Boolean finalFlush)
    at System.Web.HttpResponse.Flush()
    at System.Web.UI.HttpResponseWrapper.System.Web.UI.IHttpResponse.Flush()
    at System.Web.UI.PageRequestManager.RenderFormCallback(HtmlTextWriter writer, Control containerControl)
    at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
    at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
    at System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer)
    at System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output)
    at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
    at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
    at System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer)
    at System.Web.UI.HtmlFormWrapper.System.Web.UI.IHtmlForm.RenderControl(HtmlTextWriter writer)
    at System.Web.UI.PageRequestManager.RenderPageCallback(HtmlTextWriter writer, Control pageControl)
    at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
    at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
    at System.Web.UI.Page.Render(HtmlTextWriter writer)
    at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
    at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
    at System.Web.UI.Control.RenderControl(HtmlTextWriter writer)
    at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    After searching and analyzing, I found that the client had disconnected while I was still flushing the response, which I do when creating PDF files from a stream. To fix this kind of error, we can use the Response.IsClientConnected property to check whether the client is still connected before we flush and end the response. Here is the sample code to fix the problem:

    if (Response.IsClientConnected)
    {
        Response.Flush();
        Response.End();
    }

    That's it. Hope this helps you. Stay tuned for more. Till then, Happy Programming!!

    Technorati Tags: Exception, ASP.NET

    Read the article

  • Can you disable UNC paths in Windows?

    - by Evan
    We are trying to lock down a Terminal Server and want to remove a commercial package's ability to accept UNC file paths, i.e. so that paths in the app can only be entered using Windows drive letters. Is there any way to do this in Windows? Can we disallow UNC paths for just the app? Can we disallow UNC paths for the entire Terminal Server session? The intention is to allow the application to write only to certain directories (as mapped in the Terminal Server session). The aim is to prevent the output of files to directories that the users have access to but which are not mapped in the Terminal Server session.

    Read the article

  • My New BDD Style

    - by Liam McLennan
    I have made a change to my code-based BDD style. I start with a scenario such as:

    Pre-Editing
    * Given I am a book editor
    * And some chapters are locked and some are not
    * When I view the list of chapters for editing
    * Then I should see some chapters are editable and are not locked
    * And I should see some chapters are not editable and are locked

    and I implement it using a modified SpecUnit base class as:

    [Concern("Chapter Editing")]
    public class when_pre_editing_a_chapter : BaseSpec
    {
        private User i;
        // other context variables

        protected override void Given()
        {
            i_am_a_book_editor();
            some_chapters_are_locked_and_some_are_not();
        }

        protected override void Do()
        {
            i_view_the_list_of_chapters_for_editing();
        }

        private void i_am_a_book_editor()
        {
            i = new UserBuilder().WithUsername("me").WithRole(UserRole.BookEditor).Build();
        }

        private void some_chapters_are_locked_and_some_are_not()
        {
        }

        private void i_view_the_list_of_chapters_for_editing()
        {
        }

        [Observation]
        public void should_see_some_chapters_are_editable_and_are_not_locked()
        {
        }

        [Observation]
        public void should_see_some_chapters_are_not_editable_and_are_locked()
        {
        }
    }

    and the output from the SpecUnit report tool is:

    Chapter Editing specifications    1 context, 2 specifications

    Chapter Editing, when pre editing a chapter    2 specifications
        should see some chapters are editable and are not locked
        should see some chapters are not editable and are locked

    The intent is to provide a clear mapping from story -> scenarios -> bdd tests.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let's run a query against this DMV so we can analyze the results.

    SELECT * FROM sys.dm_tran_locks

    As we can see, a lot of lock information is returned by this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or one of several other lock types. The resource_database_id represents the ID of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition; when you have a table that is partitioned, locks can be escalated to the partition level before going to a table-level lock. The request_mode column gives us information about the type of lock that is being requested. From the screenshots above we see RangeS-S locks, which represent shared key-range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether it is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock.

    This DMV is the best place to go when you need to identify the exact locks that are being held or are pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the Books Online link below:

    http://msdn.microsoft.com/en-us/library/ms190345.aspx

    Follow me on Twitter @PrimeTimeDBA
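    To make the output easier to act on when troubleshooting blocking, a narrower query along these lines (a sketch using only the columns discussed above) surfaces just the sessions that are stuck waiting on a lock:

    -- Show only lock requests that have not yet been granted
    SELECT request_session_id,
           resource_type,
           resource_database_id,
           request_mode,
           request_status
    FROM sys.dm_tran_locks
    WHERE request_status <> 'GRANT';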

    Read the article

  • Audio doesn't work on Windows XP guest (WS 7.0)

    - by Mads
    I can't get audio to work on a Windows XP guest running on VMware Workstation 7.0 with an Ubuntu 9.10 host. Windows fails to produce any audio output, and the Windows Device Manager says the Multimedia Audio Controller is not working properly. Audio works fine in the host OS. When I open the Multimedia Audio Controller properties it says:

    Device status: The drivers for this device are not installed (Code 28)

    If I try to reinstall the driver I get the following error message:

    Cannot Install this Hardware
    There was a problem installing this hardware: Multimedia Audio Controller
    An error occurred during the installation of the device: Driver is not intended for this platform

    Has anyone else experienced this problem?
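    One detail that may help whoever answers: Workstation presents XP guests with an emulated sound card (typically the Ensoniq ES1371, whose "Creative AudioPCI" driver is not in XP's in-box driver set), which matches the Code 28 "drivers not installed" symptom. For reference, a sketch of the sound settings such a VM's .vmx file typically carries; these values are assumptions to compare against your own .vmx, not a prescribed fix:

    sound.present = "TRUE"
    sound.virtualDev = "es1371"
    sound.autodetect = "TRUE"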

    Read the article

  • Computer starts, POSTs normal, no video

    - by Annath
    So, I recently tried to start up an old computer of mine. It would not power on, so I replaced the PSU. When I powered it on, all the fans spun up and the POST beeps indicated a normal startup. The problem was there was no video output. Every couple of minutes, I hear the post code again. I think it is restarting after POST, but without video I have no idea why. Does anyone have an idea as to why this is happening, and how to fix it? EDIT: The video card is a PCI-e card. There is no integrated graphics on the motherboard. If I remove the video card, the POST indicates a missing card. When I put it back, it goes back to normal, so I know that it recognizes that the card exists.

    Read the article

  • Can't bind spawn-fcgi to address

    - by Xeoncross
    Following some nice instructions, I am almost through setting up PHP to run on nginx. However, every time I try to start spawn-fcgi I get an error message:

    demo@desktop:/usr/bin$ sudo /etc/init.d/php-fastcgi start
    spawn-fcgi: bind failed: Cannot assign requested address

    My /etc/init.d/php-fastcgi startup script is:

    #!/bin/bash
    PHP_SCRIPT=/usr/bin/php-fastcgi
    FASTCGI_USER=demo
    RETVAL=0
    case "$1" in
        start)
            su - $FASTCGI_USER -c $PHP_SCRIPT
            RETVAL=$?
            ;;
        stop)
            killall -9 php5-cgi
            RETVAL=$?
            ;;
        restart)
            killall -9 php5-cgi
            su - $FASTCGI_USER -c $PHP_SCRIPT
            RETVAL=$?
            ;;
        *)
            echo "Usage: php-fastcgi {start|stop|restart}"
            exit 1
            ;;
    esac
    exit $RETVAL

    And the /usr/bin/php-fastcgi script it invokes is:

    #!/bin/sh
    /usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -C 6 -u demo -f /usr/bin/php5-cgi

    One thing to note is that I am running the PHP CGI as the user "demo", which is my account.
    Read the article
