Search Results

Search found 27521 results on 1101 pages for '2012 memory manager kb'.


  • Are there any memory restrictions on an ASP.Net application? HttpHandler?

    - by tpower
    I have an ASP.Net MVC application that allows users to upload images. When I try to upload a really large file (400MB) I get an error. I assumed that my image processing code (home brew) was very inefficient, so I decided I would try using a third party library to handle the image processing parts. Because I'm using TDD, I wanted to first write a test that fails. But when I test the controller action with the same large file it is able to do all the image processing without any trouble. The error I get is "Out of memory". I'm sure my code is probably using a lot more memory than it needs to but I just want to know why my test passes. The other difference is that I'm using SWFUpload which is not used with the test. Could this be the cause?
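    As a rough illustration (this is not the poster's code; the handler name, target path and buffer size are hypothetical), one common way to avoid an out-of-memory error on very large uploads is to stream the request body to disk in chunks instead of buffering it – for example with GetBufferlessInputStream on ASP.NET 4.0 or later. Note that web.config's maxRequestLength (and maxAllowedContentLength on IIS 7+) would also need raising before a 400MB request is accepted at all.

        using System.IO;
        using System.Web;

        // Hypothetical handler; it would be registered in web.config against the upload URL.
        public class StreamingUploadHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // GetBufferlessInputStream avoids buffering the whole request body
                // in memory before the handler starts reading it.
                using (Stream input = context.Request.GetBufferlessInputStream())
                using (FileStream output = File.Create(@"C:\Uploads\incoming.tmp"))
                {
                    byte[] buffer = new byte[64 * 1024];
                    int bytesRead;
                    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, bytesRead); // copy one chunk at a time
                    }
                }
                context.Response.StatusCode = 200;
            }
        }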

    Read the article

  • Hold most of the objects in cache/memory instead of the database?

    - by feiroox
    Hi all, it just occurred to me: why not hold most of the objects in a cache (memory) when the application starts, if it's not that large a web application? Or have a setting for how much I want to put in the cache/memory. I guess it could require something like under 1 GB of RAM, or a lot less. All of this to speed up the application even more by not querying the database. Is it a good idea?
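    As a rough illustration of the idea (not from the original post; the types, keys and expiration are hypothetical), .NET's MemoryCache can be warmed at application start and capped so it does not grow unbounded:

        using System;
        using System.Runtime.Caching;

        class CacheWarmup
        {
            // MemoryCache.Default's size can be capped in app.config/web.config via the
            // <system.runtime.caching> section (e.g. cacheMemoryLimitMegabytes), which is
            // one way to get "a setting for how much I want to put in the cache".
            static readonly MemoryCache Cache = MemoryCache.Default;

            static void Main()
            {
                // Load frequently read, rarely changed data once at startup...
                foreach (var product in LoadProductsFromDatabase())
                {
                    Cache.Set("product:" + product.Id, product,
                              new CacheItemPolicy { SlidingExpiration = TimeSpan.FromHours(1) });
                }

                // ...so later reads hit memory instead of the database.
                var hit = Cache.Get("product:42") as Product;
                Console.WriteLine(hit != null ? hit.Name : "miss - fall back to the database");
            }

            // Placeholder for the real data access layer.
            static Product[] LoadProductsFromDatabase()
            {
                return new[] { new Product { Id = 42, Name = "Example" } };
            }
        }

        class Product { public int Id { get; set; } public string Name { get; set; } }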

    Read the article

  • Can I have Sinatra / Rack not read the entire request body into memory?

    - by Chris Markle
    Say I have a Sinatra route ala: put '/data' do request.body.read # ... end It appears that the entire request.body is read into memory. Is there a way to consume the body as it comes into the system, rather than having it all buffered in Rack/Sinatra beforehand? I see I can do this to read the body in parts, but the entire body still seems to be read into memory beforehand. put '/data' do while request.body.read(1024) != nil # ... end # ... end

    Read the article

  • Does increasing memory create a difference in the output or behavior of a Java application?

    - by Nitz
    Hey guys, I have created a Java Swing application. The application runs perfectly on my PC, but it doesn't run perfectly on the client's PC. I had increased my virtual memory on my PC earlier. So my question is: does changing the memory limit affect or change application behavior? Is there anything that changes the behavior of a Java application? Because the same application runs perfectly on my PC and does not run perfectly on the client's PC, and there is no problem in the code; I have checked three times.

    Read the article

  • Recursive MySQL function call eats up too much memory and dies.

    - by kylex
    I have the following recursive function which works... up until a point. Then the script asks for more memory once the queries exceed about 100, and when I add more memory, the script typically just dies (I end up with a white screen on my browser).

        public function returnPArray($parent=0,$depth=0,$orderBy = 'showOrder ASC'){
            $query = mysql_query("SELECT *, UNIX_TIMESTAMP(lastDate) AS whenTime FROM these_pages WHERE parent = '".$parent."' AND deleted = 'N' ORDER BY ".$orderBy."");
            $rows = mysql_num_rows($query);
            while($row = mysql_fetch_assoc($query)){
                // This uses my class and places the content in an array.
                MyClass::$_navArray[] = array(
                    'id' => $row['id'],
                    'parent' => $row['parent']
                );
                MyClass::returnPArray($row['id'],($depth+1));
            }
            $i++;
        }

    Can anyone help me make this query less resource intensive?

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn’t yet cover the SQL Server 2014 features that would require major changes to SQL Compare to support. In this post I’m going to talk about the SQL Server 2014 features we’ve already begun supporting, and which ones we’re working on for the next release of SQL Compare (v11). From SQL Compare’s perspective, the new memory-optimized table functionality (some might know it as ‘Hekaton’) has been the most important change. It isn’t a single new object type; the new functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle – inline specification of indexes. These are essential for memory-optimized tables because it’s not possible to alter the memory-optimized table’s structure, and so indexes can’t be added after the fact without dropping the table. Books Online shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it’s necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release with full support later in the year, and in a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, and these are being adapted to handle this new kind of table. Because it’s one of the largest new database engine features, there’s an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported and the new syntax found in the T-SQL reference pages. We are treating SQL Compare’s support of Natively Compiled Stored Procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them – but you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they’re essentially limited-syntax stored procedures with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation along with its detailed limitations. They should be easier to handle than memory-optimized tables simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications, which means we’ll need to temporarily disable these during the specific deployment operations that involve them – don’t worry, we’ll supply a warning if this is the case so that you can check that your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory-optimized tables.
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).

    Read the article

  • ORA-4031 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error; Note 1087773.1: ORA-4031 Diagnostics Tools [Video]

    Have you observed an ORA-04031 error reported in your alert log? An ORA-4031 error is raised when memory is unavailable for use or reuse in the System Global Area (SGA). The error message will indicate the memory pool getting errors and high level information about what kind of allocation failed and how much memory was unavailable. The challenge with ORA-4031 analysis is that the error and associated trace is for a "victim" of the problem. The failing code ran into the memory limitation, but in almost all cases it was not part of the root problem.

    Looking for the best way to diagnose? When an ORA-4031 error occurs, a trace file is raised and noted in the alert log if the process experiencing the error is a background process. User processes may experience errors without reports in the alert log or traces generated. The V$SHARED_POOL_RESERVED view will show reports of misses for memory over the life of the database. Diagnostic scripts are available in Note 430473.1 to help in analysis of the problem. There is also a training video on using and interpreting the script data, Note 1087773.1.

    11g Diagnosability
    Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4031, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file contains vital information about what led to the error condition. See Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video].

    Oracle Configuration Manager (OCM)
    Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations. See the Oracle Configuration Manager Quick Start Guide, Note 548815.1: My Oracle Support Configuration Management FAQ, and Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager.

    Common Causes/Solutions
    The ORA-4031 can occur for many different reasons. Some possible causes are: SGA components too small for the workload; auto-tuning issues; fragmentation due to application design; bugs/leaks in memory allocations. For more on the 4031 and how this affects the SGA, see Note 396940.1 Troubleshooting and Diagnosing ORA-4031 Error. Because of the multiple potential causes, it is important to gather enough diagnostics so that an appropriate solution can be identified. However, the cause is most commonly associated with configuration tuning. Ensuring that MEMORY_TARGET or SGA_TARGET is large enough to accommodate the workload can get around many scenarios. The default trace associated with the error provides very high level information about the memory problem and the "victim" that ran into the issue. The data in the default trace is not going to point to the root cause of the problem. When migrating from 9i to 10g and higher, it is necessary to increase the size of the Shared Pool due to changes in the basic design of the shared memory area. See Note 270935.1 Shared pool sizing in 10g.

    NOTE: Diagnostics on the errors should be investigated as close to the time of the error(s) as possible. If you must restart a database, it is not feasible to diagnose the problem until the database has matured and/or started seeing the problems again. See also Note 801787.1 Common Cause for ORA-4031 in 10gR2, Excess "KGH: NO ACCESS" Memory Allocation. For reference to the content in this blog, refer to Note 1088239.1 Master Note for Diagnosing ORA-4031.

    Read the article

  • Oracle Announces Release of PeopleSoft HCM 9.1 Feature Pack 2

    - by Jay Zuckert
    Big things sometimes come in small packages.  Today Oracle announced the availability of PeopleSoft HCM 9.1 Feature Pack 2, which delivers a new HR self-service user experience that fundamentally changes the way managers and employees interact with the HCM system.  Earlier this year we reviewed a number of new concept designs with our Customer Advisory Boards.  With the accelerated feature pack development cycle we have adopted, these innovations are now available to all 9.1 customers without the need for an upgrade.  There are no new products that need to be licensed for the capabilities below. For more details on Feature Pack 2, please see the Oracle press release. Included in Feature Pack 2 is a new search-based, menu-free navigation that allows managers to search for employees by name and take actions directly from the secure search results.  For example, a manager can now simply type in part of an employee’s first or last name and receive meaningful results from documents related to performance, compensation, learning, recruiting, career planning and more.  Delivered actions can be initiated directly from these search results and the actions are securely tied to HCM security and user role.  The feature pack also includes new pages that will enable managers to be more productive by aggregating key employee data into a single page.  The new Manager Dashboard and Talent Summary provide a consolidated view of data related to a manager’s team and individual team members, respectively.  The Manager Dashboard displays information relevant to their direct reports including team learning, objective alignment, alerts, and pending approvals requiring their attention.  The Talent Summary provides managers with an aggregated view of talent management-related data for an individual employee including performance history, salary history, succession options, total rewards, and competencies.  The information displayed in both the Manager Dashboard and Talent Summary is configurable by system administrators and can be personalized by each of your managers. Other Feature Pack 2 enhancements allow organizations to administer Matrix or Dotted-Line Relationship Management, which addresses the challenge of tracking and maintaining project-based organizations that cut across the enterprise and geographic regions.  From within the Company Directory and Org Viewer organization charts, managers now have access to manager self-service transactions from related actions.  More than 70 manager and employee self-service transactions have been tied into the related action framework accessible from Org Viewer, Manager Dashboard, Talent Summary and Secure Enterprise Search (SES) results.  In addition to making it easier to access manager self-service transactions, the feature pack delivers streamlined transaction pages making everyday tasks such as promoting an employee faster and more efficient. With the delivery of PeopleSoft HCM 9.1 Feature Pack 2, Oracle continues to deliver on its commitment to our PeopleSoft customers.  With this feature pack, HCM 9.1 customers will be able to deploy the newest functionality quickly, without a major release upgrade, and realize added value from their existing PeopleSoft investment.  For customers newly deploying 9.1, a new download with all of Feature Pack 2 will be available early next year.  This will also include recertified upgrade paths from 8.8, 8.9 and 9.0, for customers in the upgrade process.

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately none of these resources are unlimited and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it may affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads.  Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new “self-tuning” system simplifies getting the proper number of threads and utilizing them.
    A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that allows you to manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or prioritize a request with a work manager and then associate this work manager with one or more applications. At run-time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. There is a default work manager that is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration. Your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. However, a wrong work manager configuration can lead to a performance penalty when you were expecting an improvement.
    We can define/configure work managers at:
    •    Domain Level: config.xml
    •    Application Level: weblogic-application.xml
    •    Component Level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application use weblogic.xml)
    We can use the following predefined rules/constraints to manage the work:
    •    Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: Specifies a response time goal in milliseconds.
    •    Context Request Class: Assigns request classes to requests based on context information.
    •    Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    •    Max Threads Constraint: Limits the number of concurrent threads executing requests.
    •    Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.
    Let’s create a work manager for our application’s long-running work. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones that your long-running application is deployed to) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and allow the process to finish.

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill.
    Consolidation and Virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand. Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free. A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle. Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager Targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager Target: Oracle VM, Host, Oracle Database, and Oracle WebLogic Server. Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if the OS is Windows) and utilization-based costs (e.g. $0.05/GB of memory/hour). The self-service user who provisions these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged.
    Figure 1: Chargeback in Self-Service Portal
    Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization.
    Figure 2: Charge Summary Report
    Trending reports can be used for I.T. planning and budgeting as they show utilization and charge trends over a period of time.
    Figure 3: CPU Trend Report
    We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper, Screenwatch, Cloud Management on OTN.

    Read the article

  • installer can't find partition, but fdisk can find them

    - by pxd
    I'm installing ubuntu 12.04, my system had install 2 system -- winxp and ubuntu 10.10. Now, I want to update ubuntu to 12.04. I use usb disk to install 12.04. But, the installer can't not find my partition in my harddisk. But, the fdisk can find them. Can you help me? How to do? ubuntu@ubuntu:~$ sudo lshw -short H/W path Device Class Description system HP 2230s (NN868PA#AB2) /0 bus 3037 /0/9 memory 64KiB BIOS /0/0 processor Intel(R) Core(TM)2 Duo CPU T6570 @ 2.10GHz /0/0/1 memory 2MiB L2 cache /0/0/3 memory 32KiB L1 cache /0/0/0.1 processor Logical CPU /0/0/0.2 processor Logical CPU /0/2 memory 32KiB L1 cache /0/4 memory 2GiB System Memory /0/4/0 memory SODIMM [empty] /0/4/1 memory 2GiB SODIMM DDR2 Synchronous 800 MHz (1.2 ns) /0/100 bridge Mobile 4 Series Chipset Memory Controller Hub /0/100/2 display Mobile 4 Series Chipset Integrated Graphics Controller /0/100/2.1 display Mobile 4 Series Chipset Integrated Graphics Controller /0/100/1a bus 82801I (ICH9 Family) USB UHCI Controller #4 /0/100/1a.1 bus 82801I (ICH9 Family) USB UHCI Controller #5 /0/100/1a.2 bus 82801I (ICH9 Family) USB UHCI Controller #6 /0/100/1a.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #2 /0/100/1b multimedia 82801I (ICH9 Family) HD Audio Controller /0/100/1c bridge 82801I (ICH9 Family) PCI Express Port 1 /0/100/1c.1 bridge 82801I (ICH9 Family) PCI Express Port 2 /0/100/1c.1/0 wlan1 network PRO/Wireless 5100 AGN [Shiloh] Network Connection /0/100/1c.2 bridge 82801I (ICH9 Family) PCI Express Port 3 /0/100/1c.4 bridge 82801I (ICH9 Family) PCI Express Port 5 /0/100/1c.5 bridge 82801I (ICH9 Family) PCI Express Port 6 /0/100/1c.5/0 eth1 network 88E8072 PCI-E Gigabit Ethernet Controller /0/100/1d bus 82801I (ICH9 Family) USB UHCI Controller #1 /0/100/1d.1 bus 82801I (ICH9 Family) USB UHCI Controller #2 /0/100/1d.2 bus 82801I (ICH9 Family) USB UHCI Controller #3 /0/100/1d.7 bus 82801I (ICH9 Family) USB2 EHCI Controller #1 /0/100/1e bridge 82801 Mobile PCI Bridge /0/100/1f bridge ICH9M LPC Interface Controller /0/100/1f.2 scsi0 storage 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] /0/100/1f.2/0 /dev/sda disk 500GB WDC WD5000BEVT-0 /0/100/1f.2/0/1 /dev/sda1 volume 48GiB Windows NTFS volume /0/100/1f.2/0/2 /dev/sda2 volume 416GiB Extended partition /0/100/1f.2/0/2/5 /dev/sda5 volume 97GiB HPFS/NTFS partition /0/100/1f.2/0/2/6 /dev/sda6 volume 198GiB HPFS/NTFS partition /0/100/1f.2/0/2/7 /dev/sda7 volume 27GiB Linux filesystem partition /0/100/1f.2/0/2/8 /dev/sda8 volume 93GiB Linux filesystem partition /0/100/1f.2/1 /dev/cdrom disk CDDVDW TS-L633M /0/1 scsi6 storage /0/1/0.0.0 /dev/sdb disk 15GB STORAGE DEVICE /0/1/0.0.0/0 /dev/sdb disk 15GB /0/1/0.0.0/0/1 /dev/sdb1 volume 14GiB Windows FAT volume /1 power HZ04037 ubuntu@ubuntu:~$ ubuntu@ubuntu:~$ sudo fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x31263125 Device Boot Start End Blocks Id System /dev/sda1 * 63 102277727 51138832+ 7 HPFS/NTFS/exFAT /dev/sda2 102277728 976784129 437253201 f W95 Ext'd (LBA) /dev/sda5 102277791 307078127 102400168+ 7 HPFS/NTFS/exFAT /dev/sda6 307078191 724141151 208531480+ 7 HPFS/NTFS/exFAT /dev/sda7 724142080 781459455 28658688 83 Linux /dev/sda8 781461504 976771071 97654784 83 Linux Disk /dev/sdb: 15.9 GB, 15931539456 bytes 64 heads, 32 sectors/track, 15193 cylinders, total 31116288 
sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0009eb92 Device Boot Start End Blocks Id System /dev/sdb1 * 32 31115263 15557616 c W95 FAT32 (LBA) The Ubuntu 12.04 installer can't find the partitions on my hard disk; it only finds the device /dev/sda. (Sorry, I'm a new user, so I can't send an image.)

    Read the article

  • s3cmd fails too many times

    - by alfish
    It used to be my favorite backup transport agent but now I frequently get this result from s3cmd on the very same Ubuntu server/network: root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/ bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1] 36864 of 2711541519 0% in 1s 20.95 kB/s failed WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe) WARNING: Retrying on lower speed (throttle=0.00) WARNING: Waiting 3 sec... bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1] 36864 of 2711541519 0% in 1s 23.96 kB/s failed WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe) WARNING: Retrying on lower speed (throttle=0.01) WARNING: Waiting 6 sec... bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1] 28672 of 2711541519 0% in 1s 18.71 kB/s failed WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe) WARNING: Retrying on lower speed (throttle=0.05) WARNING: Waiting 9 sec... bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1] 28672 of 2711541519 0% in 1s 18.86 kB/s failed WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe) WARNING: Retrying on lower speed (throttle=0.25) WARNING: Waiting 12 sec... bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1] 28672 of 2711541519 0% in 1s 15.79 kB/s failed WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe) WARNING: Retrying on lower speed (throttle=1.25) WARNING: Waiting 15 sec... bkup.tgz -> s3://mybucket/bkup.tgz [1 of 1] 12288 of 2711541519 0% in 2s 4.78 kB/s failed ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file. This happens even for files as small as 100MB, so I suppose it's not a size issue. It also happens when I use put with --acl-private flag (s3cmd version 1.0.1) I appreciate if you suggest some solution or a lightweight alternative to s3cmd. Thanks

    Read the article

  • When importing an Access table into Excel, a look-up column is showing all values as numbers

    - by user3651997
    I have a basic Access to Excel question that has me frustrated. I have two Access 2010 data tables. One is a list of managers. The primary key is a manager ID (which is an autonumber because managers can have the same name), and each row also has manager name, manager email, etc. The second data table is a list of departments. The primary key for each row is a unique department code, and the foreign key is a manager ID (autonumber). I used the Look-up Wizard to create this connection. However, Access does not show the manager ID in the foreign key location. It shows Manager Name like I requested when I used the Look-up Wizard. Now I am trying to import the second table (departments) into Excel 2010. I clicked import from Access, chose the Department table, and everything popped into Excel. BUT, the Manager Name column is showing Manager ID instead. So I have a list of numbers instead of names. How can I make Excel show what I see in Access? Thanks!

    Read the article

  • FreeNAS AFP Doesn't Authenticate

    - by Timothy R. Butler
    I just set up a FreeNAS 8.0.3 server and am trying to use its AFP (Netatalk) service to access it via a Mac OS X Lion system. I created the ZFS volume, set its permissions to include my user in its owner group (and set group write permissions), created an AFP share with AFP3 and told that share to "allow" @uninet (my group). I have a user on the server named tbutler, matching the user on my Mac. I can see the server, "Beatrice," in Finder. When I try to login in Finder using "Connect As...," the user "tbutler" and the proper password, I am returned to the main Finder window with the black bar now saying "Connection Failed." Here's the most recent data from /var/messages on the server, which shows me trying to login both as a "Registered User" and a "Guest": Jul 30 00:29:07 freenas afpd[8972]: AFP3.3 Login by nobody Jul 30 00:29:08 freenas afpd[8972]: AFP logout by nobody Jul 30 00:29:08 freenas afpd[8972]: dsi_stream_read: len:0, unexpected EOF Jul 30 00:29:08 freenas afpd[8972]: afp_over_dsi: client logged out, terminating DSI session Jul 30 00:29:08 freenas afpd[8972]: AFP statistics: 0.14 KB read, 0.12 KB written Jul 30 00:29:14 freenas afpd[8975]: AFP3.3 Login by tbutler Jul 30 00:29:14 freenas afpd[8975]: AFP logout by tbutler Jul 30 00:29:14 freenas afpd[8975]: dsi_stream_read: len:0, unexpected EOF Jul 30 00:29:14 freenas afpd[8975]: afp_over_dsi: client logged out, terminating DSI session Jul 30 00:29:14 freenas afpd[8975]: AFP statistics: 0.62 KB read, 0.48 KB written Jul 30 00:29:20 freenas afpd[8978]: AFP3.3 Login by tbutler Jul 30 00:29:20 freenas afpd[8978]: AFP logout by tbutler Jul 30 00:29:20 freenas afpd[8978]: dsi_stream_read: len:0, unexpected EOF Jul 30 00:29:20 freenas afpd[8978]: afp_over_dsi: client logged out, terminating DSI session Jul 30 00:29:20 freenas afpd[8978]: AFP statistics: 0.62 KB read, 0.48 KB written Jul 30 00:29:27 freenas afpd[8983]: AFP3.3 Login by nobody (My clock is clearly not properly set, but be that as it may...) Any suggestions? UPDATE: Apparently this problem occurs if one gives the AFP share a password in the AFP share settings box. When I removed the password and tried to login using a user account again, it worked just fine.

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, the PID says it is using 470mb of RAM while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700mb. The support person says this means I have a memory leak with my process. I don't know what to believe because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used' it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory leaking java process in this case? If I use free before: total used free shared buffers cached Mem: 2097152 149264 1947888 0 0 0 -/+ buffers/cache: 149264 1947888 Swap: 0 0 0 free after: total used free shared buffers cached Mem: 2097152 1094116 1003036 0 0 0 -/+ buffers/cache: 1094116 1003036 Swap: 0 0 0 So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452mb, does that mean that the kernel is all of a sudden using an additional 500mb?

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests, as well as why it’s important to follow a goal-based pattern while performance testing your application. In Part 3 I’ll be discussing Test Result Analysis, Test Result Drill Through, Test Report Generation, Test Run Comparison, the ASP.NET Profiler and some closing thoughts.
    Test Results – I see some creepy worms!
    In Part 2 we put together a web performance test and a load test; let’s run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:
    Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results’ memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.
    After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.
    Analyse the load test results of a previously run load test – We’ll see this in the section where I discuss the comparison between two test runs.
    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and drill down to the user activity chart where you can hover over to see more details of the test run.
    Generate Report => Test Run Comparisons
    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side by side analysis of two test results or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.
    Compare results
    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
Leaving Thoughts
While load testing the application with an excessive load for a long duration of time, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but that might just escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; but realistically the garbage collection completes in a fraction of a second. To understand this better let’s look at the .NET heap. It is divided into the large heap and the small heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so short-lived objects live in Gen-0 and the long-living objects eventually move to Gen-2 as garbage collection goes through. As you can see in the picture below, all objects < 85 KB in size are first assigned to Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process is started; the process checks for all the dead objects and marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1’s dead objects are cleaned out, the rest are moved to Gen-2, and Gen-0 objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this it could also be the case that you are waiting for a long-running database process to complete. Let’s explore the heap a bit more…
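To make the generation story concrete, here is a small, hedged C# sketch (not from the original article) that shows a surviving object being promoted from Gen 0 towards Gen 2, and a large allocation going straight to the large object heap:

    using System;

    class GenerationDemo
    {
        static void Main()
        {
            // A freshly allocated small object starts in Gen 0.
            var data = new byte[1024];
            Console.WriteLine("After allocation:     Gen {0}", GC.GetGeneration(data));

            // Each collection it survives promotes it one generation, up to Gen 2,
            // which is where long-lived objects accumulate.
            GC.Collect();
            Console.WriteLine("After 1st collection: Gen {0}", GC.GetGeneration(data));
            GC.Collect();
            Console.WriteLine("After 2nd collection: Gen {0}", GC.GetGeneration(data));

            // Allocations of roughly 85 KB or more bypass the small object heap and go
            // to the large object heap, which is only collected along with Gen 2.
            var large = new byte[90000];
            Console.WriteLine("Large allocation:     Gen {0}", GC.GetGeneration(large));
        }
    }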
How can you address this? Well, there are two ways of addressing it.
1. Workaround – An x86 (32-bit) processor only allows a maximum of 4GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and the IIS and .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team Builds by default build your application in ‘Compile as any’ mode, the application is built such that it will run in x86 mode if run on an x86 processor and in x64 mode if run on an x64 processor. The problem with this is not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compiled your application in x86 mode by changing the ‘compile as any’ selection to ‘compile as x86’ in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows) and what that means is, you could use 8GB+ worth of RAM; if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.
2. Solution – Now that you have put a workaround in place the IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs the question: “How do I identify possible memory leaks in my application?” Well, I won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great starting point; it definitely gets you started in the right direction. Let’s have a look at how.
Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session Properties page and, as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’, namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET Object lifetime information’.
Now if you fire off the profiling session on your pages you will notice that the results allow you to view ‘Object Lifetime’, which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the Large heap, etc. Another great feature of the profiler is that if your application has > 5% of cases where objects die right after making it to Gen-2 storage, a threshold alert is generated to alert you. You also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. So now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.
Caching
One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
Let’s look at some code – trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache, a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as soon as a user comes in and finds that the cache is empty, that user takes the lock and starts to populate the cache – no more concurrency issues. But let’s say you are processing 10 requests per second: by the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after taking the lock and before the actual call to the database, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn’t I say that the code won’t be very intuitive? Maybe you should leave a comment – you don’t want another developer to come in, think “what a fresher”, and wonder why you are checking the cache key for null twice! The downside of caching is that you are storing the data outside of the database, and the data could be wrong because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data – let’s say only one entry point for data updates – you could write some logic to set the cache object to null every time new data is entered. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is micro caching, which means you cache the query results for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let’s say your website gets 10 requests per second: if you retain the cached results for even 1 minute you will have immense performance gains, and you would reduce hits to the database for searching by 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites or price-comparison engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It’s a perfect trade-off: by micro caching the results the site gains 100% of the performance benefits but every once in a while annoys a customer because the fare has expired.
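The article’s snippet itself isn’t included in this excerpt, but a rough, hedged sketch of the double-checked locking plus micro caching pattern described above might look like this in C# (the cache key, the 60-second window and the SearchResults type are illustrative, not the author’s original code):

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class SearchResultCache
    {
        private static readonly object CacheLock = new object();

        public static SearchResults GetResults(string cacheKey, Func<SearchResults> loadFromDatabase)
        {
            // First (unlocked) check: the common case is served straight from the cache.
            var cached = HttpRuntime.Cache[cacheKey] as SearchResults;
            if (cached != null)
                return cached;

            lock (CacheLock)
            {
                // Second check inside the lock: requests queued behind the first loader
                // find the cache already populated and skip the database entirely.
                cached = HttpRuntime.Cache[cacheKey] as SearchResults;
                if (cached != null)
                    return cached;

                SearchResults results = loadFromDatabase();

                // Micro caching: keep the results for a short, fixed window so stale
                // data is bounded and no explicit invalidation is needed.
                HttpRuntime.Cache.Insert(cacheKey, results, null,
                    DateTime.Now.AddSeconds(60), Cache.NoSlidingExpiration);

                return results;
            }
        }
    }

    // Placeholder result type for the sketch.
    public class SearchResults { }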
But the trade-off works in the favour of these sites, as they are still able to process up to 30+ page requests per second, which means they can cater to the site traffic while maybe losing 1 customer every once in a while to a competitor who is also using a similar caching technique – and what are the odds that the user will not come back to their site sooner or later?
Recap
Resources
Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.
The Road Ahead
Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc. – please leave a comment. Next up is ‘Load Testing in the Cloud’, where I’ll be working on exploring the possibilities of running Test Controllers/Agents in the Cloud. See you on the other side! Thank You!

    Read the article

  • File Watcher Task

    The task will detect changes to existing files as well as new files; both actions will cause the file to be found when available. A file is available when the task can open it exclusively. This is important for files that take a long time to be written, such as large files, or those that are just written slowly or delivered via a slow network link. It can also be set to look for existing files first (1.2.4.55). The full path of the found file is returned in up to three ways:
    The ExecValueVariable of the task. This can be set to any String variable.
    The OutputVariableName when specified. This can be set to any String variable.
    The FullPath variable within OnFileFoundEvent. This is a File Watcher Task specific event.
    Advance warning of a file having been detected, but not yet available, is returned through the OnFileWatcherEvent. This event does not always coincide with the completion of the task, as completion and the OnFileFoundEvent are delayed until the file is ready for use. This event indicates that a file has been detected, and that file will now be monitored until it becomes available. The task will only detect and report on the first file that is created or changes; any subsequent changes will be ignored. Task properties and their usages are documented below:
    Filter (String) - Default filter *.* will watch all files. Standard Windows wildcards and patterns can be used to restrict the files monitored.
    FindExistingFiles (Boolean) - Indicates whether the task should check for any existing files that match the path and filter criteria, before starting the file watcher.
    IncludeSubdirectories (Boolean) - Indicates whether changes in subdirectories are accepted or ignored.
    OutputVariableName (String) - The name of the variable into which the full file path found will be written on completion of the task. The variable specified should be of type String.
    Path (String) - Path to watch for new files or changes to existing files. The path is a directory, not a full filename. For a specific file, enter the file name in the Filter property and the directory in the Path property.
    PathInputType (FileWatcherTask.InputType) - Three input types are supported for the path: Connection - a File connection manager, of type existing folder; Direct Input - type the path directly into the UI or set it on the property as a literal string; Variable - the name of the variable which contains the path.
    Timeout (Integer) - Time in minutes to wait for a file. If no files are detected within the timeout period the task will fail. The default value of 0 means infinite, and will not expire.
    TimeoutAsWarning (Boolean) - The default behaviour is to raise an error and fail the task on timeout. This property allows you to suppress the error on timeout; a warning event is raised instead, and the task succeeds. The default value is false.
    Installation
    The task is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 only - finally, you will have to add the task to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items....
Select the SSIS Control Flow Items tab, and then check the File Watcher Task in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations.
Downloads
The File Watcher Task is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.
File Watcher Task for SQL Server 2005
File Watcher Task for SQL Server 2008
File Watcher Task for SQL Server 2012
Version History
SQL Server 2012
Version 3.0.0.16 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
SQL Server 2008
Version 2.0.0.14 - Fixed user interface bug. A migration problem caused the UI type editors to reference an old SQL 2005 assembly. (17 Nov 2008)
Version 2.0.0.7 - SQL Server 2008 release. (20 Oct 2008)
SQL Server 2005
Version 1.2.6.100 - Fixed UI bug with TimeoutAsWarning property not saving correctly. Improved expression support in UI. File availability detection changed to use a read-only lock, allowing reduced permissions to be used. Corrected an install issue which prevented installation on 64-bit machines with SSIS runtime-only components. (18 Mar 2007)
Version 1.2.5.73 - Added TimeoutAsWarning property. Gives the ability to suppress the error on timeout; a warning event is raised instead, and the task succeeds. (Task Version 3) (27 Sep 2006)
Version 1.2.4.61 - Fixed a bug which could cause a loop condition with an unexpected exception such as incorrect file permissions. (20 Sep 2006)
Version 1.2.4.55 - Added FindExistingFiles property. When true the task will check for an existing file before the file watcher itself actually starts. (Task Version 2) (8 Sep 2006)
Version 1.2.3.39 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. Property type validation improved. (12 Jun 2006)
Version 1.2.1.0 - SQL Server 2005 IDW 16 Sept CTP. Further UI enhancements, including expression indicator. Fixed bug caused by execution within a loop; subsequent iterations detected the same file as the first iteration. Added IncludeSubdirectories property. Fixed bug when changes made in subdirectories, and folder change was detected, causing task failure. (Task Version 1) (6 Oct 2005)
Version 1.2.0.0 - SQL Server 2005 IDW 15 June CTP. Changes made include an enhanced UI, the PathInputType property for greater flexibility with path input, the OutputVariableName property, and the new OnFileFoundEvent event. (7 Sep 2005)
Version 1.1.2 - Public Release (16 Nov 2004)
Screenshots
Troubleshooting
Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005 and SQL Server 2008. If you get an error when you try and use the task along the lines of The task with the name "File Watcher Task" and the creation name ... is not registered for use on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer.
You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. The full error message is shown below for reference:

TITLE: Microsoft Visual Studio
------------------------------
The task with the name "File Watcher Task" and the creation name "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, Konesans.Dts.Tasks.FileWatcherTask, Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b" is not registered for use on this computer.
Contact Information: File Watcher Task

A similar error message can be shown when trying to edit the task if the Microsoft Exception Message Box is not installed. This useful component is installed as part of the SQL Server Management Studio tools, but occasionally, due to the custom options chosen during SQL Server 2005 setup, it may be absent. If you get an error like Could not load file or assembly 'Microsoft.ExceptionMessageBox.. you can manually download and install the missing component. It is available as part of the Feature Pack for SQL Server 2005 release. The feature packs are occasionally updated by Microsoft, so you may like to check for a more recent edition, but you can find the Microsoft Exception Message Box download links here - Feature Pack for Microsoft SQL Server 2005 - April 2006.

If you encounter this problem on SQL Server 2008, please check that you have installed the SQL Server client components. The component is no longer available as a separate download for SQL Server 2008, as noted in the Microsoft documentation for Deploying an Exception Message Box Application. The full error message is shown below for reference, although note that the Version will change between SQL Server 2005 and SQL Server 2008:

TITLE: Microsoft Visual Studio
------------------------------
Cannot show the editor for this task.
------------------------------
ADDITIONAL INFORMATION:
Could not load file or assembly 'Microsoft.ExceptionMessageBox, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. (Konesans.Dts.Tasks.FileWatcherTask)

Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

Sample Code

If you want to use the task programmatically, here is some sample code for creating a basic package and configuring the task. It uses a variable to supply the path to watch, and also sets a variable for the OutputVariableName. Once execution is complete it writes out the file found to the console.
// Requires a reference to Microsoft.SqlServer.ManagedDTS.dll and:
// using System; using Microsoft.SqlServer.Dts.Runtime;

/// <summary>
/// Create a package with a File Watcher Task
/// </summary>
public void FileWatcherTaskBasic()
{
    // Create the package
    Package package = new Package();
    package.Name = "FileWatcherTaskBasic";

    // Add variable for input path, the folder to look in
    package.Variables.Add("InputPath", false, "User", @"C:\Temp\");

    // Add variable for the file found, to be used on OutputVariableName property
    package.Variables.Add("FileFound", false, "User", "EMPTY");

    // Add the Task
    package.Executables.Add("Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask, " +
        "Konesans.Dts.Tasks.FileWatcherTask, Version=1.2.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b");

    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;

    // Set basic properties
    taskHost.Properties["PathInputType"].SetValue(taskHost, 1); // InputType.Variable
    taskHost.Properties["Path"].SetValue(taskHost, "User::InputPath");
    taskHost.Properties["OutputVariableName"].SetValue(taskHost, "User::FileFound");

#if DEBUG
    // Save package to disk, DEBUG only
    new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
#endif

    // Display variable value before execution to check EMPTY
    Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

    // Execute package
    package.Execute();

    // Display variable value after execution, e.g. C:\Temp\File.txt
    Console.WriteLine("Result Variable: {0}", package.Variables["User::FileFound"].Value);

    // Perform simple check for execution errors
    if (package.Errors.Count > 0)
    {
        foreach (DtsError error in package.Errors)
        {
            Console.WriteLine("ErrorCode    : {0}", error.ErrorCode);
            Console.WriteLine("  SubComponent : {0}", error.SubComponent);
            Console.WriteLine("  Description  : {0}", error.Description);
        }
    }
    else
    {
        Console.WriteLine("Success - {0}", package.Name);
    }

    // Clean-up
    package.Dispose();
}

(Updated installation and troubleshooting sections, and added sample code July 2009)
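As a rough illustration of the availability check described above - where a file only counts as found once the task can take a lock on it (a read-only lock from version 1.2.6.100 onwards) - here is a small hedged C# sketch. It is not the task's actual internal code; the class name, polling interval and timeout handling are invented for this example, and a real SSIS package would simply use the task rather than code like this.

using System;
using System.IO;
using System.Threading;

public static class FileAvailabilityExample
{
    // Polls until the file can be opened for reading while refusing to share with
    // writers; the open throws an IOException while another process still has the
    // file open for writing. Returns false if the deadline passes first, loosely
    // mirroring the task's Timeout behaviour.
    public static bool WaitUntilAvailable(string path, TimeSpan timeout, TimeSpan pollInterval)
    {
        DateTime deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            if (File.Exists(path))
            {
                try
                {
                    // Read access, sharing with readers only - a writer still holding
                    // the file causes this open to fail, so we keep waiting.
                    using (new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
                    {
                        return true;
                    }
                }
                catch (IOException)
                {
                    // File is still being written or is locked - try again shortly.
                }
            }
            Thread.Sleep(pollInterval);
        }
        return false;
    }
}

For example, FileAvailabilityExample.WaitUntilAvailable(@"C:\Temp\File.txt", TimeSpan.FromMinutes(5), TimeSpan.FromSeconds(2)) would poll every two seconds for up to five minutes before giving up.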

    Read the article

  • Why does tracerpt use up all of my Sql Server's memory?

    - by Cypher
    We have an MS SQL Server 2008 machine with 12 GB of RAM... twice now within the last week this server was knocked on its backside by a process called "tracerpt.exe", which was found to have taken up ALL of the system's memory, leaving nothing for SQL Server. I've done my homework and figured out what this program is... but I still have no idea why it's hogging up so much RAM (though I have an idea), nor what application is actually executing it. This server is the back-end to a Microsoft Dynamics CRM 4.0 application which is hosted on a separate server, and is our production database used for just about everything. If this program is necessary, I would like to be able to find the application that is executing this thing and remove it, or disable whatever feature is causing this quite annoying occurrence. Any ideas?

    Read the article

  • Website with a large number of users keeps going down due to memory leaks: Tomcat 6 and Java 6

    - by user1766478
    We host many websites on one of our two virtual servers. We use Tomcat 6 and Java 6. It is an MVC model with a Hibernate-like layer. The problem is that one of our biggest clients, with the largest number of members, keeps crashing the server every 6-8 hours (precisely in the mornings when most members log in). We have been having this issue for 4 days now. We are trying to figure out the problem, but we suspect memory leaks. Any suggestions?

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory A

    - by Richard T
    Hello All, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are: I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's native I/O size. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it. I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.

    Read the article

  • Word 2007 "Out of Memory or Disk Space" Error on launch.

    - by Adam
    Word 2007 is installed on a Vista Home Premium machine, and whenever it starts up it opens what appears to be a dynamic installer to do something and then throws up the "Out of Memory or Disk Space" error. Word 2007 never completes starting up. Reinstalling Word hasn't helped, and if I can avoid reinstalling Windows until Windows 7 is released and get Word working in the meantime, that would be ideal. I've been looking around for a solution, one of which seemed to point to a problem with the user account. I created a second user on the machine and Word still had the same problem. The other solution that seems possible is a corrupted normal.dot/normal.dotm file. However, even in the location it should be, I can't seem to find it. Am I going in the right direction with this? Is there another solution I haven't come across that will fix this? If renaming normal.dot/normal.dotm could fix this, how can I find it?

    Read the article

  • HP p410i array controller - what happens if i add memory?

    - by James
    I have a p410i array controller that only has 256 MB of RAM. We want to create a RAID 5, so we have procured a 512 MB write-back cache module. If we install the write-back cache, will this erase the existing RAID information? The server currently has 2 disks in RAID 1; 6 are spare, waiting for an upgrade to create a RAID 5. The concern is that if we replace/upgrade the memory for the controller, we will wipe the existing production RAID 1 array. Thanks in advance.

    Read the article


< Previous Page | 316 317 318 319 320 321 322 323 324 325 326 327  | Next Page >