Search Results

Search found 5861 results on 235 pages for 'ssis reporting pack'.

Page 35 of 235

  • sensors reporting weird temperatures

    - by Felix
    lm-sensors is reporting weird temps for me:

        $ sensors
        coretemp-isa-0000
        Adapter: ISA adapter
        Core 0:   +38.0°C  (high = +72.0°C, crit = +100.0°C)
        coretemp-isa-0001
        Adapter: ISA adapter
        Core 1:   +35.0°C  (high = +72.0°C, crit = +100.0°C)
        coretemp-isa-0002
        Adapter: ISA adapter
        Core 2:   +32.0°C  (high = +72.0°C, crit = +100.0°C)
        coretemp-isa-0003
        Adapter: ISA adapter
        Core 3:   +42.0°C  (high = +72.0°C, crit = +100.0°C)
        w83627dhg-isa-0290
        Adapter: ISA adapter
        Vcore:    +1.10 V  (min = +0.00 V, max = +1.74 V)
        in1:      +1.62 V  (min = +0.06 V, max = +0.17 V)  ALARM
        AVCC:     +3.34 V  (min = +2.98 V, max = +3.63 V)
        VCC:      +3.34 V  (min = +2.98 V, max = +3.63 V)
        in4:      +1.83 V  (min = +1.30 V, max = +1.15 V)  ALARM
        in5:      +1.26 V  (min = +0.83 V, max = +1.03 V)  ALARM
        in6:      +0.11 V  (min = +1.22 V, max = +0.56 V)  ALARM
        3VSB:     +3.30 V  (min = +2.98 V, max = +3.63 V)
        Vbat:     +3.18 V  (min = +2.70 V, max = +3.30 V)
        fan1:        0 RPM  (min = 0 RPM, div = 128)  ALARM
        fan2:     1117 RPM  (min = 860 RPM, div = 8)
        fan3:        0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan4:        0 RPM  (min = 10546 RPM, div = 128)  ALARM
        fan5:        0 RPM  (min = 10546 RPM, div = 128)  ALARM
        temp1:    +88.0°C  (high = +20.0°C, hyst = +4.0°C)   ALARM  sensor = diode
        temp2:    +25.0°C  (high = +80.0°C, hyst = +75.0°C)  sensor = diode
        temp3:   +121.5°C  (high = +80.0°C, hyst = +75.0°C)  ALARM  sensor = thermistor
        cpu0_vid: +2.050 V

    Please note temp3. How can I know what temp3 is, and why is it so high? The system is really stable (which I guess it wouldn't be at those temps). Also, note the really decent core temps, which suggest a healthy system as well. My guess is that the readout is wrong. On another computer it reported temperatures below 0 degrees centigrade, which was not possible considering the ambient temperature of ~22-24°C. Is this some known bug/issue? Should I try some Windows programs (like CPU-Z) and see if they give similar results?

    Read the article

  • GWT: reporting crawling errors for non existing links

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another URL that never existed. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, FYI). Example:

        http://example.com/joomla/index.php/component/user/register
        Linked from: http://example.com/joomla/component/user/login?return=L2######

    What is going on? UPDATE 1: I tried something. I provided one of the faulty URLs to the "Fetch as Google" functionality. Instead of returning a 404, it returns a 301 to another Joomla page.

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>
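
    One way to double-check that redirect outside of Webmaster Tools is to request the faulty URL without following redirects and look at the status line and the Location header. A minimal C# sketch (the URL is the placeholder host from the question):

        using System;
        using System.Net;

        class RedirectCheck
        {
            static void Main()
            {
                // Faulty URL reported by Google Webmaster Tools (placeholder host from the question)
                var url = "http://example.com/joomla/index.php/component/user/register";

                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "GET";
                request.AllowAutoRedirect = false;   // we want to see the 301 itself, not the final page

                try
                {
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("Status:   {0} {1}", (int)response.StatusCode, response.StatusCode);
                        Console.WriteLine("Location: {0}", response.Headers["Location"]);
                    }
                }
                catch (WebException ex)
                {
                    // 4xx/5xx responses raise WebException; report them the same way
                    var errorResponse = ex.Response as HttpWebResponse;
                    if (errorResponse != null)
                        Console.WriteLine("Status:   {0} {1}", (int)errorResponse.StatusCode, errorResponse.StatusCode);
                }
            }
        }

    If the Location header points at another /joomla/ URL, the redirect is being generated by the current site (for example by a stale rewrite rule or plugin), not by Google.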

    Read the article

  • Report Books and Parameters with Telerik Reporting

    Telerik Reporting provides a simple yet powerful component called the ReportBook that allows multiple reports to be combined into one. Doing so makes displaying, printing, and exporting multiple reports a much simpler task for end users. Providing this type of functionality does leave one question, though: "How are report parameters handled?" This entry focuses on answering that simple question. Click here to download the sample code so you can follow along. Creating a ReportBook: ReportBooks are supported in all three environments the ReportViewer supports (WinForms, Silverlight, and ASP.NET), and adding a ReportBook to each of these environments is very similar. This example focuses on using a ReportBook with the WinForms ReportViewer:

        1. Create a WinForms application.
        2. Add a reference to the project containing the reports.
        3. Drag a ReportViewer from the ToolBox into the designer.
        4. Drag a ReportBook from the ToolBox into the designer. A dialog will be displayed asking for the reports to be included in the ReportBook.
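
    The same thing can be wired up in code instead of the designer. A rough WinForms sketch, with the caveat that Report1 and Report2 are placeholder report classes, and that the Reports collection and the viewer's Report property reflect the Telerik Reporting API of that era (newer versions work with report sources), so check them against your installed version:

        using System;
        using System.Windows.Forms;
        using Telerik.Reporting;

        public class ReportBookForm : Form
        {
            private readonly Telerik.ReportViewer.WinForms.ReportViewer reportViewer =
                new Telerik.ReportViewer.WinForms.ReportViewer();

            public ReportBookForm()
            {
                reportViewer.Dock = DockStyle.Fill;
                Controls.Add(reportViewer);

                // Combine several reports into a single book (Report1/Report2 are placeholder report classes)
                var reportBook = new ReportBook();
                reportBook.Reports.Add(new Report1());
                reportBook.Reports.Add(new Report2());

                // The viewer treats the whole book as one document for viewing, printing, and export
                reportViewer.Report = reportBook;
                Load += (s, e) => reportViewer.RefreshReport();
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new ReportBookForm());
            }
        }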

    Read the article

  • 2013 U.S. GAAP Financial Reporting Taxonomy Available for Public Review and Comment

    - by Theresa Hickman
    FASB recently released the proposed 2013 U.S. GAAP Financial Reporting Taxonomy. Comments are due October 29, 2012, and the final taxonomy is expected to be published in early 2013. The proposed 2013 U.S. GAAP taxonomy and instructions on how to submit comments are available on the FASB's XBRL page. In previous blog entries, I talked about how Oracle Hyperion Disclosure Management supports the latest taxonomy, enabling financial managers to easily comply with the latest filing requirements. The taxonomy is a list of computer-readable tags in XBRL that allows companies to annotate the voluminous financial data that is included in typical long-form financial statements and related footnote disclosures. The tags allow computers to automatically search for, assemble, and process data so it can be readily accessed and analyzed by investors, analysts, journalists, and regulators. You do not have to have Oracle Hyperion Financial Management, used for consolidating financial results, to generate XBRL. You just need Oracle Hyperion Disclosure Management to generate XBRL instance documents from financial applications such as Oracle E-Business Suite, Oracle PeopleSoft, Oracle JD Edwards EnterpriseOne, and Oracle Fusion General Ledger. To generate XBRL tags and complete SEC filings using your existing financial applications with Oracle Hyperion Disclosure Management, here are the steps:

        1. Download the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management to create a company taxonomy.
        2. Publish financial statements from the general ledger to Microsoft Excel or Microsoft Word.
        3. Create the SEC filing in the Microsoft programs and perform the XBRL tag mapping in Oracle Hyperion Disclosure Management.
        4. Ensure that the SEC filing meets XBRL and SEC EDGAR Filer Manual validation requirements.
        5. Validate and submit the company taxonomy and XBRL instance document to the SEC.

    Get more details about Oracle Hyperion Disclosure Management.

    Read the article

  • CUOA Workshop: “L’evoluzione dei modelli e dei sistemi di analisi e reporting direzionale”

    - by Paolo Leveghi
    This past June 26th, the workshop "L'evoluzione dei modelli e dei sistemi di analisi e reporting direzionale" ("The evolution of models and systems for management analysis and reporting") was held in the Aula Magna of the Fondazione CUOA, promoted by the Club Finance CUOA in collaboration with Oracle. A recent Gartner analysis showed that Finance and IT executives have identified Business Intelligence, Analytics, and Performance Management as the priority area for "technology" investments in 2013, confirming, if anyone still needed confirmation, companies' continuous and growing attention to their ability to define, produce, and manage information that supports identifying opportunities, assessing risks and impacts, and simulating the effects of ordinary and extraordinary operations... in a single concept, information that supports steering the business. This finding is even more interesting when combined with the results of a survey conducted by CFO Magazine and KPMG International, in which 48% of the CFOs interviewed said they consider their systems dated and inflexible, and therefore a limit, if not an outright brake, on their desire to be agents of change. Against this backdrop, companies show a growing interest in understanding what to do, today and tomorrow, to improve their ability to exploit an extremely valuable corporate resource: information. The objective of the workshop was therefore to analyze the scenario in which companies in North-East Italy are operating today, verifying, thanks to the testimonies of ITAL TBS and Gruppo Carraro, the "state of health" of their Business Analytics processes, the level of corporate culture and the degree of adoption of solutions by CFOs, management, and corporate decision makers more generally, and the evolutionary paths for improving on these topics.

    Read the article

  • Self Service Reporting With PowerPivot

    - by blakmk
    There are so many cool new features in SQL Server 2008 R2 that it was difficult for me to pick a topic for T-SQL Tuesday. But the one that I am now a secret fan of, I once resented for its creation. Let me explain: for years I have encountered reporting systems cobbled together in tools like Access and Excel, built by "database hobbyists" who had no formal training in database design or best practices. They would take their monstrosities as far as they could go before ultimately they stopped working or the person who wrote them left the company. At that point it would become the resident DBA's problem to support it as a live application. So when I first heard of PowerPivot, a sense of déjà vu overtook me and I felt like the guy in the Austin Powers movie, knowing the inevitable is coming but somehow unsure how to get out of the way. But when I eventually saw it in action, I quickly realised that it is a very powerful tool. It has a much smaller "time to market" than traditional BI architectures. Combined with the new features of Excel, some pretty impressive dashboards can be produced. Of course PowerPivot is not a magic bullet, and along with potential scalability issues there are the usual issues, such as master data management and data quality, that cannot be overcome easily with PowerPivot. As a tool, though, it has potential. Traditional BI is expensive, both in terms of time and the amount of resources it takes to deliver the system. The time lag between an analyst or a commercial accountant requesting reports and the report being delivered can make a huge commercial difference. I have observed companies where empowered end users become extremely productive when allowed to plough into various disparate datasets. It may not be the correct way or the most sustainable, but it's cheap and quick. In these times when budgets are being slashed and we are forced to deliver more with less, why not empower the end user with a tool that is designed for exactly this task.... @blakmk

    Read the article

  • Seeking solution for printing-reporting .NET

    - by Parhs
    I am developing an application that, in extreme cases, prints about 20-25 pages per minute to various thermal printers from separate threads. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so the GDI-to-EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, so it's impossible to change the codepage/emulation. Also, most computers running my software are low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that here receipts are signed by an Electronic Fiscal Memory device. Most of these devices don't do a good job of extracting text from XPS, because they don't follow the standard but rather the way Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings, and it's impossible to send a paper-cut command after the signature. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image, and do text replacement. I know there is StringTemplate, which could handle generating from the template, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution/approach for this? Or is there anything ready-made?
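
    For the GDI/EMF route, the standard .NET approach is System.Drawing.Printing.PrintDocument, where each page is drawn with GDI+ calls in the PrintPage event. A minimal C# sketch that prints a small table (the data, fonts, and column offsets are illustrative assumptions, not from the question):

        using System;
        using System.Drawing;
        using System.Drawing.Printing;

        class ReceiptPrinter
        {
            // Illustrative data; in the real application this would come from the template engine
            static readonly string[][] Rows =
            {
                new[] { "Item",   "Qty", "Price" },
                new[] { "Coffee", "2",   "3.80"  },
                new[] { "Water",  "1",   "1.20"  }
            };

            static void Main()
            {
                var doc = new PrintDocument();
                // doc.PrinterSettings.PrinterName = "My Thermal Printer";  // optional: pick a specific printer
                doc.PrintPage += OnPrintPage;
                doc.Print();   // rendering goes through the printer's GDI driver
            }

            static void OnPrintPage(object sender, PrintPageEventArgs e)
            {
                using (var font = new Font("Courier New", 9))
                {
                    float y = e.MarginBounds.Top;
                    float[] columnOffsets = { 0f, 150f, 220f };   // simple fixed-width columns

                    foreach (var row in Rows)
                    {
                        for (int i = 0; i < row.Length; i++)
                        {
                            e.Graphics.DrawString(row[i], font, Brushes.Black,
                                                  e.MarginBounds.Left + columnOffsets[i], y);
                        }
                        y += font.GetHeight(e.Graphics);
                    }

                    e.HasMorePages = false;   // single page for this sketch
                }
            }
        }

    On a driver configured for EMF spooling, those GDI calls are recorded into the spool file as EMF, which is the behaviour the question is aiming for, and the tabular layout stays under your control without a reporting engine.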

    Read the article

  • Problem with XLSX file on Office 2003 with Compatibility Pack

    - by MadBoy
    I have a machine which has Office 2003 installed with the Compatibility Pack. A user received a file which she has to work with in XLSX format (the file has to stay in that format, so saving it as XLS is not an option). When she opens the file from the Desktop or any other location it gives an error like "Cannot find the file in the following location", and in the background it starts converting the XLSX file to XLS. In the end Excel opens up with some random file name like X000008.xls (read-only), which is just a 1-to-1 conversion of the XLSX document. However, if I go directly into Excel and use File / Open to open the XLSX file, no error appears and no conversion is done; the file is writable and can be saved in XLSX format. The file name is simple, like This_file something.XLSX; I've even tried removing all the spaces, but it gave no better results. Does anyone have any recommendations? So far I have: 1) uninstalled the Compatibility Pack and installed it again, then tried opening the file without any other updates, and the error still pops up. In progress: I am now running all possible Office updates (about 5 of them), which I suppose won't fix anything (as they were already applied when I uninstalled the Compatibility Pack). Next in line (not yet done) is installing Windows XP SP3. What else can I do?

    Read the article

  • What permissions needed to connect to SQL Server Integration Services

    - by rwmnau
    I need to allow a consultant to connect to SSIS on a SQL Server 2008 box without making him a local administrator. If I add him to the local Administrators group, he can connect to SSIS just fine, but it seems that I can't grant him enough permissions through SQL Server to give him these rights without being a local admin. I've added him to every role on the server, and to every database role in msdb short of dbo, and he's still not able to connect. I don't see any SSIS-related Windows groups on the server. Is membership in the local Administrators group really required to connect to the SSIS instance on a SQL Server? It seems like there should be somewhere I can grant "SSIS admin" rights to a user (even if it's a Windows account and not a SQL account), but I can't find that place. UPDATE: I've found an MSDN article (see the section titled "Eliminating the 'Access Is Denied' Error") that describes how to resolve the problem, but even after following the steps I'm still not able to connect. Just wanted to add it to the discussion.

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded of the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS, or any other ETL tool for that matter, does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data are changing from customer to customer? Let's examine a real-life situation where a health management company receives claims data from their customers in various source formats. Even though this company supplied all their customers with the same claims forms, they ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from the various regional hospitals they needed to process had slightly different data formats, e.g. "integer" versus "string" data field definitions. Moreover, the data itself was represented with slight nuances, e.g. "0001124" or "1124" or "0000001124" to represent a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms. As a result, they experienced a lot of redundancy in these ETL processes and recognized quickly that their system would become more difficult to maintain over time. So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string, acc_no(varchar), while a second claims form represents the same data item as an integer, account_no(integer). This would break your traditional ETL process, as the data mappings are hard-wired. But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application:

        acc_no(varchar) is mapped to account_number(integer)
        [expressor Studio: first claim form schema mapping]
        account_no(integer) is also mapped to account_number(integer)
        [expressor Studio: second claim form schema mapping]

    All the data processing logic that follows manipulates the data as an integer value named account_number. Well, these are the kinds of problems that the expressor data integration solution automates for you. I've been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
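
    The abstraction being described can be illustrated outside of any particular tool: map each customer-specific field onto one canonical, strongly typed attribute, and write all downstream logic against that attribute only. A small C# sketch (the class, field, and mapping names are invented for illustration and are not expressor's API):

        using System;
        using System.Collections.Generic;

        // Canonical representation used by all downstream transformation logic
        class Claim
        {
            public int AccountNumber { get; set; }
        }

        class ClaimMapper
        {
            // One mapping per source field name: how to convert the raw value into the canonical type.
            // "acc_no" arrives as a string ("0001124"), "account_no" as an integer (1124).
            static readonly Dictionary<string, Func<object, int>> AccountFieldMappings =
                new Dictionary<string, Func<object, int>>
                {
                    { "acc_no",     raw => int.Parse((string)raw) },  // first claim form: varchar
                    { "account_no", raw => (int)raw }                 // second claim form: integer
                };

            static Claim Map(string sourceField, object rawValue)
            {
                // Downstream logic only ever sees account_number as an integer,
                // regardless of how the source represented it.
                return new Claim { AccountNumber = AccountFieldMappings[sourceField](rawValue) };
            }

            static void Main()
            {
                Claim a = Map("acc_no", "0001124");
                Claim b = Map("account_no", 1124);
                Console.WriteLine("{0} == {1}", a.AccountNumber, b.AccountNumber);  // prints 1124 == 1124
            }
        }

    Note that parsing the varchar form also collapses the leading-zero variants ("0001124", "1124", "0000001124") into the same integer, so the downstream logic never sees the inconsistency.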

    Read the article

  • Speaking at SQL Saturday #39 in NYC!

    - by andyleonard
    I am honored to present Applied SSIS Design Patterns and Introduction to Incremental Loads at SQL Saturday #39 in New York City! If you're there and you read this blog, be sure to stop by and introduce yourself! :{> Andy ...(read more)

    Read the article

  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    - by pinaldave
    This article was written as a response to T-SQL Tuesday #005 – Reporting. The three most important components of any computer or server are the CPU, memory, and hard disk. This post talks about how to get more details about these three components using Management Data Collection, which generates reports for them by default. Configuring Data Collection is a very easy task and can be done very quickly. Please note: there are many different ways to get reports generated for CPU, memory, and IO; you can use DMVs, Extended Events, as well as Perfmon to trace the data. Keeping with the T-SQL Tuesday subject of reporting, this post gives a visual tutorial for quickly configuring Data Collection and generating reports. From Books Online: "The data collector is a core component of the Data Collection platform for SQL Server 2008 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data, unlike SQL Trace." Let us go over the visual tutorial on how quickly Data Collection can be configured: expand the Management node under the main server node and follow the directions in the pictures. These reports can be exported to PDF as well as Excel by right-clicking on the report. The additional screenshots show examples of the reports; they are very self-explanatory but can be drilled down for further details. Well, as we can see, it is very easy to configure and utilize this tool. Do you use this tool in your organization? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports
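
    For the DMV route mentioned above, the same CPU and memory figures that the Data Collection reports show can be pulled with a few lines of code. A minimal C# sketch querying sys.dm_os_sys_info and sys.dm_os_sys_memory (the connection string is a placeholder, and the login needs VIEW SERVER STATE permission):

        using System;
        using System.Data.SqlClient;

        class ServerResourceCheck
        {
            static void Main()
            {
                // Placeholder connection string; point it at the instance being monitored
                var connectionString = "Server=localhost;Database=master;Integrated Security=true";

                const string query = @"
                    SELECT i.cpu_count,
                           m.total_physical_memory_kb / 1024     AS total_memory_mb,
                           m.available_physical_memory_kb / 1024 AS available_memory_mb
                    FROM sys.dm_os_sys_info AS i
                    CROSS JOIN sys.dm_os_sys_memory AS m;";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("CPUs: {0}, Memory: {1} MB total, {2} MB available",
                                reader["cpu_count"], reader["total_memory_mb"], reader["available_memory_mb"]);
                        }
                    }
                }
            }
        }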

    Read the article

  • Windows Update can't install Windows Vista SP1

    - by Harry Johnston
    If you install Windows Vista RTM and run Windows Update, many updates are offered and will successfully install. Once all other updates are installed, Windows Vista Service Pack 1 is offered. When you attempt to install Windows Vista Service Pack 1, the service pack installation wizard appears, presenting the license agreement and so on. However, shortly after the installation starts, the wizard disappears. Windows Update says that the update was installed successfully. However, Service Pack 1 is not in fact installed, and will be detected as needed again on the next update check. Repeat ad nauseam. On checking the Windows Update log, error 0x80190194 appears near the beginning of an update check, associated with the URL http://update.microsoft.com/vista/windowsupdate/redir/vistawuredir.cab. Why won't Service Pack 1 install properly, and how do I fix it?

    Read the article

  • Can spliting an access database cause printer and reporting issues?

    - by leeand00
    We have a setup in which our users log into an Access database using MS Access 2003 over an RDP connection. The users log in to their own machines first, using roaming profiles. They then click an RDP connection file on the desktop and log in to the remote server via RDP, where they use MS Access as the shell; they don't have access to any of the explorer.exe features such as the Start menu. The database they are logging into is more of an application, and provides functionality for entering data, querying data, and running reports via form-based menus. It all worked pretty well until we split the database as it was nearing 2 GB in size. We moved the payroll data out into a separate partition: a database with the same name in a different folder, both of them on the server. Only two tables were moved into this new database partition, and they were re-linked as external tables in the new partition. Now, while everything appears to be working fine data-wise after the split, there's a new issue when our users log in via RDP and attempt to run reports: often the report will not display, and instead the user sees an error about the click event of the form. At first I didn't even know it was printer-related, as we didn't really change anything related to the printers as far as I knew. Confused about the error, I talked to the guy who previously worked here and who was in charge of splitting the database, and he told me to tell the users to set their default printer (on their local machines, not on the server) to the "printer" Microsoft XPS Document Writer, which isn't a physical printer at all. This allowed the users to display their reports, but if they want to print out reports, they are required to go to the File menu and select Print; clicking the print icon on the toolbar takes them to a Save As... dialog, as would be expected when using the Microsoft XPS Document Writer as the default printer. It's easy to tell if a user is having the problem, because a quick mouseover of the printer icon will yield a tooltip of (none) when they cannot access their reports, and a tooltip of Microsoft XPS Document Writer when they can view the reports. If the user's default printer is set to anything other than Microsoft XPS Document Writer on their local machine, then (none) is always displayed when they RDP to the database. The RDP settings are set up to transfer the local printer to the server. Telling the users to do this to print has been more of a band-aid on the whole situation until we find a better solution and an explanation as to why splitting a database would prevent users from printing or even viewing Access database reports. Which is why I'm here asking this question. Also of note: all the printers on the network now show up on the server, so that when the users do click File -> Print to print their reports on a physical printer, they have to look through a huge list of printers to find theirs in the dropdown. So the little band-aid fix we have is not ideal. Previously, only the printers on the user's local machine were displayed here, not all the printers on the network. My co-worker thinks this has something to do with permissions; I personally think it has to do with roaming profiles and Group Policies, which is what I've been reading up on. I really don't know how to fix this or how it is related to splitting the database.

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no Report Library template and no Report Center site template; the only additional functionality available is the Report Viewer web part. Background: a 2-server WSS 3.0 farm, with CA (Central Administration), the WFE (web front end) and the Reporting Services add-in installed on one, and SQL Server 2005 SP2 with Reporting Services (RS) and all the databases installed on the other. I have a VM environment set up and have rolled this back and repeated the install a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'Site Settings' I have a 'Reporting Services' heading with a 'Manage shared schedules' item/link; I'm not sure if there should be other options. I was under the impression that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a Report Library to an existing site, neither of which seems to be available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • How to update all the SSIS packages’ Connection Managers in a BIDS project with PowerShell

    - by Luca Zavarella
    During the development of a BI solution, we all know that 80% of the time is spent during the ETL (Extract, Transform, Load) phase. If you use the BI stack provided by Microsoft SQL Server, this step is accomplished by developing a number of Integration Services (SSIS) packages. In general, the number of packages created in the ETL phase for a non-trivial BI solution is quite significant. An SSIS package, therefore, extracts data from a source, it "hammers" :) the data and then transfers it to a specific destination. Very often it happens that the connection to the source data is the same for all packages. Using Integration Services, this results in having the same Connection Manager (perhaps with the same name) for all packages: the source data of my BI solution comes from a Helper database (HLP), so for each package that imports this data I have the HLP Connection Manager (the use of a Shared Data Source is not recommended, because the connection string is hard-wired, and therefore you have to open the SSIS project and use the proper wizard to change it...). In order to change the HLP connection string at runtime, we could use Package Configurations, or we could run our packages with DTLoggedExec by Davide Mauri (a must-have if you are developing with SQL Server 2005/2008). But my need was to change all the HLP connections in all packages within the SSIS Visual Studio project, because I had to version them through Team Foundation Server (TFS). A good scribe with a lot of patience could have changed all the connections by hand, double-clicking the HLP Connection Manager of each package and then changing the referenced server/database. Not being endowed with such virtues :) I took just a little time to write a small script in PowerShell, using the fact that an SSIS package (a .dtsx file) is nothing but an XML file, and therefore can be changed quite easily. I'm not a guru of PowerShell, but I managed more or less to put together the following lines of code:

        # Pieces of the connection string to look for, and the catalog name to swap in
        $LeftDelimiterString = "Initial Catalog="
        $RightDelimiterString = ";Provider="
        $ToBeReplacedString = "AstarteToBeReplaced"
        $ReplacingString = "AstarteReplacing"
        $MainFolder = "C:\MySSISPackagesFolder"

        # Every .dtsx package in the folder (a package is just an XML file)
        $files = get-childitem "$MainFolder" *.dtsx `
               | Where-Object {!($_.PSIsContainer)}

        foreach ($file in $files)
        {
              (Get-Content $file.FullName) `
                    | % {$_ -replace "($LeftDelimiterString)($ToBeReplacedString)($RightDelimiterString)", "`$1$ReplacingString`$3"} `
                    | Set-Content $file.FullName;
        }

    The script above just opens every SSIS package (.dtsx) in the supplied folder, then for each of them searches for the following text:

        Initial Catalog=AstarteToBeReplaced;Provider=

    and replaces the text found with this:

        Initial Catalog=AstarteReplacing;Provider=

    I won't go into the details of each cmdlet used; I leave the reader to look those up. Alternatively, you can use the specific object model exposed in the .NET assemblies provided by Integration Services, or you can use the Pacman utility. Enjoy! :) P.S. Using TFS as the versioning system, before running the script I checked out the packages and, after the script executed successfully, I checked them in.

    Read the article

  • Option in the Html Agility Pack to parse a tag written as `&lt;table&gt;`

    - by Harikrishna
    Is there any option in the Html Agility Pack that can parse a tag which appears HTML-encoded, i.e. written with &lt; and &gt; entities? If there is a tag like <table>, then the Html Agility Pack parses the information from the table tag properly. But if the tag appears as &lt;table&gt; (entity-encoded), then it does not parse the information from the table tag. So, is there any option in the Html Agility Pack to parse information from such tags as well?
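
    As far as I know there is no parsing option in the Html Agility Pack for entity-encoded tags, but one workaround is to decode the entities first with HtmlEntity.DeEntitize and then parse the result as normal markup. A minimal C# sketch (the sample string is made up):

        using System;
        using HtmlAgilityPack;

        class EncodedTableParser
        {
            static void Main()
            {
                // Made-up sample where the table markup arrives entity-encoded
                string raw = "&lt;table&gt;&lt;tr&gt;&lt;td&gt;Name&lt;/td&gt;&lt;td&gt;City&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;";

                // Decode &lt; / &gt; (and other entities) back into real markup, then parse as usual
                string decoded = HtmlEntity.DeEntitize(raw);

                var doc = new HtmlDocument();
                doc.LoadHtml(decoded);

                foreach (var cell in doc.DocumentNode.SelectNodes("//table//td"))
                {
                    Console.WriteLine(cell.InnerText);
                }
            }
        }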

    Read the article

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran "git fsck", which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while the local one is 1.6.0.4. I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail... Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again, so perhaps that gives a clue. I'm trying to narrow down exactly which file is causing the problem now.

    Read the article

  • HTML Agility Pack

    - by Harikrishna
    I want to parse an HTML table using the Html Agility Pack, and extract only some predefined columns' data from the table. But I am new to parsing and to the Html Agility Pack, and although I have tried, I don't know how to use the Html Agility Pack for my need. If anybody knows, please give me an example if possible.
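
    A minimal sketch of one way to do this with the Html Agility Pack, assuming the first row of the table holds the column headers and that the wanted columns are identified by header name (the URL and the column names are placeholders):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using HtmlAgilityPack;

        class TableColumnExtractor
        {
            static void Main()
            {
                // Placeholder columns and source page; adjust to the real table
                var wantedColumns = new[] { "Name", "Price" };

                var web = new HtmlWeb();
                HtmlDocument doc = web.Load("http://example.com/page-with-table.html");

                HtmlNodeCollection rows = doc.DocumentNode.SelectNodes("//table[1]//tr");
                if (rows == null) return;

                // First row is assumed to be the header row; remember the index of each wanted column
                var headers = rows[0].SelectNodes("th|td").Select(n => n.InnerText.Trim()).ToList();
                var wantedIndexes = wantedColumns.Select(c => headers.IndexOf(c)).Where(i => i >= 0).ToList();

                foreach (HtmlNode row in rows.Skip(1))
                {
                    HtmlNodeCollection cells = row.SelectNodes("td");
                    if (cells == null) continue;

                    var values = wantedIndexes
                        .Where(i => i < cells.Count)
                        .Select(i => cells[i].InnerText.Trim());

                    Console.WriteLine(string.Join(" | ", values));
                }
            }
        }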

    Read the article
