Search Results

Search found 1968 results on 79 pages for 'pickle dump'.


  • Building a database installer with WiX, datadude and Visual Studio 2010

    - by jamiet
    Today I have been using Windows Installer XML (WiX) to build an installer (.msi file) that installs a SQL Server database on a server of my choosing; the source code for that database lives in datadude (a tool which you may know by one of quite a few other names). The basis for this work was a most excellent blog post by Duke Kamstra entitled Implementing a WIX installer that calls the GDR version of VSDBCMD.EXE, which covers the delicate intricacies of doing this – particularly how to call Vsdbcmd.exe in a CustomAction. Unfortunately there are a couple of things wrong with Duke’s post. First, searching for “datadude wix” didn’t turn it up in the first page of search results and hence it took me a long time to find it – and I knew that it existed. If someone else were after a post on using WiX with datadude it’s likely that they would never have come across Duke’s post, and that would be a great shame because it’s the definitive post on the matter. Second, it was written in October 2009 and had not been updated for Visual Studio 2010. This blog post is an attempt to solve those problems. Hopefully I’ve solved the first one just by following a few of my blogging SEO tips while writing this post; in the rest of it I will explain how I took Duke’s code and updated it to work in Visual Studio 2010. If you need to build a database installer using WiX, datadude and Visual Studio 2010 then you still need to follow Duke’s blog post, so go and do that now. Below are the amendments that I made to get the project building in Visual Studio 2010:
    - In VS2010, datadude’s output files have changed from being called Database.<suffix> to <ProjectName>_Database.<suffix>. Duke’s code was referencing the old file name formats.
    - Duke used $(var.SolutionDir) and relative paths to point to datadude artefacts; I have replaced these with Votive Project References: http://wix.sourceforge.net/manual-wix3/votive_project_references.htm
    - I commented out all references to MicrosoftSqlTypesDbschema in DatabaseArtifacts.wxi. I don't think this is produced in VS2010 (I may be wrong about that, but it wasn't in the output from my project).
    - Similarly, I commented out the component MicrosoftSqlTypesDbschema in VsdbcmdArtifacts.wxi. It wasn't where Duke's code said it should have been, so I am assuming/hoping it isn't needed.
    - Duke's ?define block to work out the appropriate SrcArchPath wasn't working for me (i.e. <?if $(var.Platform)=x64 ?> was evaluating to false), so I took out the conditional logic and declared the path explicitly as the “Program Files (x86)” path. The old code is still there, though, if you need to put it back.
    - None of the <RegistrySearch> stuff is needed for VS2010, so I commented it all out.
    - Changed to use the /manifest option rather than the /model option on the vsdbcmd.exe command line. Personal preference is all!
    - Added a new component in order to bundle along the vsdbcmd.exe.config file.
    - Made the install of the CustomAction dependent on the relevant feature being selected for install. This one is actually really important – deselecting the database feature for installation does not, by default, stop the CustomAction from executing and so would cause an error, so that scenario needs to be catered for.
    I have made my amended solution available for download at: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.zip It contains two projects: the WiX project and the datadude project that is the source to be deployed (for demo purposes it only contains one table). 
    I have also made the .msi available, although in order that it gets through file blockers I changed the name from InstallMyDatabase.msi to InstallMyDatabase.ms_ – simply rename the file back once you have downloaded it from: http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.ms%5E_ . You can try it out for yourself – the only thing it does is dump the files into %Program Files%\MyDatabase and use them to install a database onto a server of your choosing with a name of your choosing – no damaging side-effects. I will caveat this by saying “it works on my machine” and, not having access to a plethora of different machines, I haven’t tested it anywhere else. One potential issue that I know of is that Vsdbcmd.exe has a dependency on SQL Server CE, although if you have SQL Server tools or Visual Studio installed you should be fine. Unfortunately it’s not possible to bundle the SQL Server CE installer inside the .msi because Windows will not allow you to call one installer from inside another – the recommended way to get around this problem is to build a bootstrapper to bundle the whole lot together, but doing that is outside the scope of this blog post. If you discover any other issues then please let me know. Here are the screenshots from the installer: And once installed…. Hope this is useful! @jamiet 
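    To make the last of the amendments above more concrete, here is a minimal sketch of what it can look like in WiX markup. The Id values, feature name and vsdbcmd arguments below are my own placeholders rather than anything lifted from Duke's solution or mine, so treat it as illustrative only. The key part is the MSI action-state condition on the <Custom> element, which stops the CustomAction from running when the database feature is deselected:

      <!-- Sketch: a deferred CustomAction that shells out to vsdbcmd.exe -->
      <CustomAction Id="CA_DeployDatabase"
                    FileKey="VsdbcmdExe"
                    ExeCommand="/a:Deploy /manifest:&quot;[INSTALLLOCATION]MyDatabase.deploymanifest&quot;"
                    Execute="deferred"
                    Return="check" />
      <InstallExecuteSequence>
        <!-- "&DatabaseFeature=3" means the feature's action state is "install locally",
             so the action is skipped when that feature is deselected -->
        <Custom Action="CA_DeployDatabase" After="InstallFiles"><![CDATA[&DatabaseFeature=3]]></Custom>
      </InstallExecuteSequence>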

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations.  Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging.  Some of the data flows, however, are daily batch-type transmissions.  For the daily batch transmissions, it was decided that we'd use SSIS to pull the data directly from DB2 to either SQL Server or a flat file.  I'm not at all an SSIS guy; I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA, mostly informal stuff like that.  And, as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy.  Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?).   One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file.  It sounded like a pretty straightforward task.  As always, the devil is in the details.  Configuring the DB2 connection manager took a bit of trial and error.  As such, I thought I'd post my experiences here in hopes that they might save someone the effort I went through.  That being said, please keep in mind that, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on. Before you get started, you need to figure out how you're going to connect to DB2.  From the research I did, it looks like there are a few options.  IBM has both an OLE DB and a .Net data provider, which can be found here.  I installed their client access tools and tried to use both the .Net and OLE DB providers, but I received an error message from both when attempting to connect to the iSeries indicating that I needed a license for a product called DB2 Connect.  I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out.  The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2.  This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here. As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it.  I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM.  However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.   Once you get the driver installed, create a connection manager in your package just like you normally would and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter your host name, login credentials and initial catalog. A couple of things to note here.  First, the Initial Catalog needs to be the same as your host name.  Not sure why that is, but trust me, it just does.  Second, for credentials, in my environment we're using what the client's iSeries people refer to as "profiles".  I guess this is similar to SQL auth in the SQL Server world.  In other words, they've given me a username and password for connecting to DB2, so I've entered it here. Next, click the Data Links button.  
    On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out.  From the little bit I've read, packages are used to control SQL compilation and each DB2 connection needs one.  The package collection, I believe, controls where your package is created.  One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority. Next click the ellipsis next to the Network drop-down.  Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my host name here. Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries, and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform.  Process binary as character was necessary to handle some of the DB2 data types.  I had a few columns that showed all their data as "System.Byte[]"; checking Process binary as character resolved this. At this point, you should be good to go.  You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration.  The Test Connection button is obvious: it just verifies you can connect to the host using the configuration data you've entered.  The Packages button will attempt to connect to the host and create the packages required to execute queries. This isn't meant to be a comprehensive look at SSIS and DB2; these are just some of the notes I've come up with since I started working with DB2 and SSIS.  I'm sure as I continue developing my packages I'll find more quirks and will post them here.
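    For what it's worth, the settings above boil down to a connection string along these lines. The server name, profile and password are placeholders, and the exact property names can vary a little between versions of the Microsoft provider, so verify against what your own connection manager generates rather than copying this verbatim:

      Provider=DB2OLEDB;Network Address=MYISERIES;Initial Catalog=MYISERIES;
      Package Collection=QGPL;User ID=MYPROFILE;Password=*****;
      Process Binary as Character=True;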

    Read the article

  • SmtpClient and Locked File Attachments

    - by Rick Strahl
    Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient the file remains locked after the message has been sent. Oddly this particular issue hasn’t cropped up before for me although these routines are in use in a number of applications I’ve built. The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I built using Sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retained the same class interface but now internally uses SmtpClient which holds a flat property interface that makes it less verbose to send off email messages. File attachments in this interface are handled by providing a comma delimited list for files in an Attachments string property which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure. The jist of the code is something like this: /// <summary> /// Fully self contained mail sending method. Sends an email message by connecting /// and disconnecting from the email server. /// </summary> /// <returns>true or false</returns> public bool SendMail() { if (!this.Connect()) return false; try { // Create and configure the message MailMessage msg = this.GetMessage(); smtp.Send(msg); this.OnSendComplete(this); } catch (Exception ex) { string msg = ex.Message; if (ex.InnerException != null) msg = ex.InnerException.Message; this.SetError(msg); this.OnSendError(this); return false; } finally { // close connection and clear out headers // SmtpClient instance nulled out this.Close(); } return true; } /// <summary> /// Configures the message interface /// </summary> /// <param name="msg"></param> protected virtual MailMessage GetMessage() { MailMessage msg = new MailMessage(); msg.Body = this.Message; msg.Subject = this.Subject; msg.From = new MailAddress(this.SenderEmail, this.SenderName); if (!string.IsNullOrEmpty(this.ReplyTo)) msg.ReplyTo = new MailAddress(this.ReplyTo); // Send all the different recipients this.AssignMailAddresses(msg.To, this.Recipient); this.AssignMailAddresses(msg.CC, this.CC); this.AssignMailAddresses(msg.Bcc, this.BCC); if (!string.IsNullOrEmpty(this.Attachments)) { string[] files = this.Attachments.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries); foreach (string file in files) { msg.Attachments.Add(new Attachment(file)); } } if (this.ContentType.StartsWith("text/html")) msg.IsBodyHtml = true; else msg.IsBodyHtml = false; msg.BodyEncoding = this.Encoding; … additional code omitted return msg; } Basically this code collects all the property settings of the wrapper object and applies them to the SmtpClient and in GetMessage() to an individual MailMessage properties. Specifically notice that attachment filenames are converted from a comma-delimited string to filenames from which new attachments are created. The code as it’s written however, will cause the problem with file attachments not being released properly. Internally .NET opens up stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly but the local files are not immediately closed. 
    As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem: // Create and configure the message using (MailMessage msg = this.GetMessage()) { smtp.Send(msg); if (this.SendComplete != null) this.OnSendComplete(this); // or use an explicit msg.Dispose() here } The Message object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed. I think this is rather odd behavior for this scenario, however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it uses the files by opening and streaming them and then closing them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API-level code should clean up as much as possible, and this is clearly not happening here, resulting in unexpected behavior. Apparently lots of other folks have run into this before, as I found based on a few Twitter comments on this topic. Odd to me too is that SmtpClient() doesn’t implement IDisposable – it’s only the MailMessage (and Attachments) that implement it and require it to clean up left-over resources like open file handles. This means that you couldn’t even use a using() statement around the SmtpClient code to resolve this – instead you’d have to wrap it around the message object, which again is rather unexpected. Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time – hopefully this post will help someone avoid this same half an hour of hunting and searching. Resources: Full code to SmptClientNative (West Wind Web Toolkit Repository) SmtpClient Documentation MSDN © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET  
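    Stripped of my wrapper class, the fix amounts to the pattern below. This is a minimal sketch with made-up server, addresses and file path rather than code from the toolkit itself:

      using System.Net.Mail;

      public static class MailSample
      {
          public static void Send()
          {
              var smtp = new SmtpClient("mail.example.com");
              using (var msg = new MailMessage("from@example.com", "to@example.com",
                                               "Report attached", "See attachment."))
              {
                  // The Attachment opens a read stream on the file and keeps it open
                  msg.Attachments.Add(new Attachment(@"c:\temp\report.pdf"));
                  smtp.Send(msg);
              }   // Dispose() runs here, closing the attachment stream and releasing the file lock
          }
      }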

    Read the article

  • WebLogic Server Performance and Tuning: Part I - Tuning JVM

    - by Gokhan Gungor
    Each WebLogic Server instance runs in its own dedicated Java Virtual Machine (JVM), which is its runtime environment. Every Admin Server in any domain executes within a JVM, and the same applies to Managed Servers. WebLogic Server can be used for a wide variety of applications and services, all of which share the same runtime environment and resources. Oracle WebLogic ships with two different JVMs, HotSpot and JRockit, and you can choose which one you want to use. The JVM is designed to optimize itself, but it also provides startup options for making adjustments. There are default values for its memory and garbage collection settings. In the real world you will not want to stick with the defaults; rather, you will want to customize these values based on your applications, which can produce large gains in performance from small changes to the JVM parameters. We can tell the garbage collector how to delete garbage and we can also tell the JVM how much space to allocate for each generation (of Java objects) or for the heap. Remember that during garbage collection no other work is executed within the JVM (this is called "stop the world"), which can affect overall throughput. Each JVM has its own memory segment called heap memory, which is the storage for Java objects. These objects can be grouped based on their age, such as the young generation (recently created objects) and the old generation (surviving objects that have lived for some time). A Java object is considered garbage when it can no longer be reached from anywhere in the running program. Each generation has its own memory segment within the heap. When this segment gets full, the garbage collector deletes all the objects that are marked as garbage to create space. When the old generation space gets full, the JVM performs a major collection to remove the unused objects and reclaim their space. A major garbage collection takes a significant amount of time and can affect system performance. When we create a managed server, either on the same machine or on a remote machine, it gets its initial startup parameters from the $DOMAIN_HOME/bin/setDomainEnv.sh/cmd file. By default two parameters are set:     Xms: the initial heap size     Xmx: the max heap size Try to set the initial and max heap sizes to the same value. The startup time can be a little longer, but for long-running applications it will provide better performance. When we set -Xms512m -Xmx1024m, the initial heap size is 512m. This means that there are pages of memory (between 512m and 1024m) that the JVM does not yet explicitly control; they are controlled by the OS, which could reserve them for other tasks. In this case it is an advantage if the JVM claims the entire memory at once and does not have to spend time extending the heap when more memory is needed. You can also use the -XX:MaxPermSize (maximum size of the permanent generation) option for the Sun JVM. You should adjust this size accordingly if your application dynamically loads and unloads a lot of classes, in order to optimize performance. 
    You can set the JVM options/heap size from the following places:
    - Through the Admin console, in the Server Start tab
    - In the startManagedWebLogic script for the managed servers ($DOMAIN_HOME/bin/startManagedWebLogic.sh/cmd): JAVA_OPTIONS="-Xms1024m -Xmx1024m" ${JAVA_OPTIONS}
    - In the setDomainEnv script for the managed servers and admin server (domain wide): USER_MEM_ARGS="-Xms1024m -Xmx1024m"
    When there is free memory available in the heap but it is too fragmented and not contiguously located to store the object, or when there is actually insufficient memory, we can get java.lang.OutOfMemoryError. We should create a thread dump and analyze it, if possible, in case of such an error. The second option we can use to produce higher throughput is garbage collection tuning. We can roughly divide GC algorithms into two categories: parallel and concurrent. A parallel GC stops the execution of the whole application and performs the full GC; this generally provides better throughput but also higher latency, using all the CPU resources during GC. A concurrent GC, on the other hand, produces low latency but also lower throughput, since it performs GC while the application executes. The JRockit JVM provides some useful command-line parameters to control its GC scheme, like the -XgcPrio command-line parameter, which takes the following options:
    - XgcPrio:pausetime (to minimize latency; mostly concurrent GC)
    - XgcPrio:throughput (to maximize throughput; parallel GC)
    - XgcPrio:deterministic (to guarantee a maximum pause time, for real-time systems)
    The Sun JVM has similar parameters (like -XX:+UseParallelGC or -XX:+UseConcMarkSweepGC) to control its GC scheme. We can add -verbose:gc -XX:+PrintGCDetails to monitor indications of a problem with garbage collection. Try configuring the JVMs of all managed servers to execute in -server mode to ensure that they are optimized for a server-side production environment.
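    As a concrete illustration, the domain-wide setDomainEnv.sh change for equal initial/max heap sizes plus GC logging might look like the sketch below. The sizes are only examples, and -XX:MaxPermSize applies to the Sun/HotSpot JVM rather than JRockit:

      # setDomainEnv.sh (domain wide) - example values only
      USER_MEM_ARGS="-Xms1024m -Xmx1024m -XX:MaxPermSize=256m"
      export USER_MEM_ARGS

      # optional GC logging to monitor garbage collection behaviour
      JAVA_OPTIONS="${JAVA_OPTIONS} -verbose:gc -XX:+PrintGCDetails"
      export JAVA_OPTIONS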

    Read the article

  • jQuery Datatable in MVC &hellip; extended.

    - by Steve Clements
    There are a million plugins for jQuery, and when a web forms developer like myself works in MVC, making use of them is par for the course!  MVC is the way now, web forms are but a memory!! Grids / tables are my focus at the moment.  I don’t want to get into writing reams of CSS and HTML, but it’s not acceptable to simply dump a table on the screen; functionality like sorting, paging, fixed headers and perhaps filtering are expected behaviour.  What isn’t always required, though, is the massive functionality like editing etc. you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don’t need it. That is where the jQuery DataTable plugin comes in.  It doesn’t have editing “out of the box” (you can add other plugins as you require to achieve such functionality). What it does, though, is very nicely format a table (and integrate with jQuery UI) without needing to hook up any async actions etc.  Take a look here… http://www.datatables.net I did in the first instance start looking at the Telerik MVC grid control – I’m a fan of Telerik controls, and if you are developing an in-house or open source app you get the MVC stuff for free…nice!  Their grid however is far more than I require.  Note: Using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing – I won’t cover that here though – mainly because I don’t have a clear answer on the best way to solve it! One nice thing about the dataTable above is how easy it is to extend http://www.datatables.net/examples/plug-ins/plugin_api.html and there are some nifty examples on the site already… I however have a requirement that wasn’t on the site … I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable if required within its own space, i.e. everything above the grid shouldn't scroll as well.  Now a CSS master may have a great solution to this … I’m not that master and so didn’t! The content above the grid can vary so any kind of fixed positioning is out. So I wrote a little extension for the DataTable, hooked that up to the document.ready event and window.resize event. Initialising my dataTable ( s )… $(document).ready(function () {   var dTable = $(".tdata").dataTable({ "bPaginate": false, "bLengthChange": false, "bFilter": true, "bSort": true, "bInfo": false, "bAutoWidth": true, "sScrollY": "400px" });   My extension to the API to give me the resizing….   // ********************************************************************** // jQuery dataTable API extension to resize grid and adjust column sizes // $.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) { var id = oSettings.nTable.id; var dt = $("#" + id); var top = dt.position().top; var winHeight = $(document).height(); var remain = (winHeight - top) - 83; dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;"); this.fnAdjustColumnSizing(); } This is very much in debug mode, so pretty verbose at the moment – I’ll tidy that up later! You can see the last call is a call to an existing method; as the columns are fixed and that normally involves some CSS voodoo, a call to adjust those sizes is required. Just above is the style that the dataTable gives the grid wrapper div; I got that from some firebug action and stuck in my new height. The –83 is to give me the space at the bottom I require for the fixed footer!   
    Finally I hook that up to the load and window resize.  I’m actually using jQuery UI tabs as well, so I’ve got that in the open event of the tabs.   $(document).ready(function () { var oTable; $("#tabs").tabs({ "show": function (event, ui) { oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable(); if (oTable.length > 0) { oTable.fnSetHeightToBottom(); } } }); $(window).bind("resize", function () { oTable.fnSetHeightToBottom(); }); }); And that's all there is to it.  Testament to the wonders of jQuery and the immense community surrounding it – to which I am extremely grateful. I’ve also hooked up some custom column filtering on the grid – pretty normal stuff though – you can get what you need for that from their website.  I do hide the out-of-the-box filter input as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work, and that input comes with it!  Tip: fnFilter is the method you want, with the column index as a param – I used data tags to simplify that one (see the sketch below).
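    As an illustration of that tip, a column-specific filter can be as simple as the sketch below. The input class and data-column attribute are my own convention rather than anything the plugin requires, and dTable is the DataTable instance initialised earlier:

      // One text input per filterable column, e.g.
      // <input type="text" class="columnFilter" data-column="2" />
      $(".columnFilter").bind("keyup", function () {
          // fnFilter(value, columnIndex) restricts the filter to a single column
          dTable.fnFilter(this.value, $(this).data("column"));
      });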

    Read the article

  • WebLogic Server JMS WLST Script – Who is Connected To My Server

    - by james.bayer
    Ever want to know who was connected to your WebLogic Server instance for troubleshooting?  An email exchange about this topic and JMS came up this week, and I’ve heard it come up once or twice before too.  Sometimes it’s interesting or helpful to know the list of JMS clients (IP Addresses, JMS Destinations, message counts) that are connected to a particular JMS server.  This can be helpful for troubleshooting.  Tom Barnes from the WebLogic Server JMS team provided some helpful advice: The JMS connection runtime mbean has “getHostAddress”, which returns the host address of the connecting client JVM as a string.  A connection runtime can contain session runtimes, which in turn can contain consumer runtimes.  The consumer runtime, in turn has a “getDestinationName” and “getMemberDestinationName”.  I think that this means you could write a WLST script, for example, to dump all consumers, their destinations, plus their parent session’s parent connection’s host addresses.    Note that the client runtime mbeans (connection, session, and consumer) won’t necessarily be hosted on the same JVM as a destination that’s in the same cluster (client messages route from their connection host to their ultimate destination in the same cluster). Writing the Script So armed with this information, I decided to take the challenge and see if I could write a WLST script to do this.  It’s always helpful to have the WebLogic Server MBean Reference handy for activities like this.  This one is focused on JMS Consumers and I only took a subset of the information available, but it could be modified easily to do Producers.  I haven’t tried this on a more complex environment, but it works in my simple sandbox case, so it should give you the general idea. # Better to use Secure Config File approach for login as shown here http://buttso.blogspot.com/2011/02/using-secure-config-files-with-weblogic.html connect('weblogic','welcome1','t3://localhost:7001')   # Navigate to the Server Runtime and get the Server Name serverRuntime() serverName = cmo.getName()   # Multiple JMS Servers could be hosted by a single WLS server cd('JMSRuntime/' + serverName + '.jms' ) jmsServers=cmo.getJMSServers()   # Find the list of all JMSServers for this server namesOfJMSServers = '' for jmsServer in jmsServers: namesOfJMSServers = jmsServer.getName() + ' '   # Count the number of connections jmsConnections=cmo.getConnections() print str(len(jmsConnections)) + ' JMS Connections found for ' + serverName + ' with JMSServers ' + namesOfJMSServers   # Recurse the MBean tree for each connection and pull out some information about consumers for jmsConnection in jmsConnections: try: print 'JMS Connection:' print ' Host Address = ' + jmsConnection.getHostAddress() print ' ClientID = ' + str( jmsConnection.getClientID() ) print ' Sessions Current = ' + str( jmsConnection.getSessionsCurrentCount() ) jmsSessions = jmsConnection.getSessions() for jmsSession in jmsSessions: jmsConsumers = jmsSession.getConsumers() for jmsConsumer in jmsConsumers: print ' Consumer:' print ' Name = ' + jmsConsumer.getName() print ' Messages Received = ' + str(jmsConsumer.getMessagesReceivedCount()) print ' Member Destination Name = ' + jmsConsumer.getMemberDestinationName() except: print 'Error retrieving JMS Consumer Information' dumpStack() # Cleanup disconnect() exit() Example Output I expect the output to look something like this and loop through all the connections, this is just the first one: 1 JMS Connections found for AdminServer with JMSServers myJMSServer JMS 
Connection:   Host Address = 127.0.0.1   ClientID = None   Sessions Current = 16    Consumer:      Name = consumer40      Messages Received = 1      Member Destination Name = myJMSModule!myQueue Notice that it has the IP Address of the client.  There are 16 Sessions open because I’m using an MDB, which defaults to 16 connections, so this matches what I expect.  Let’s see what the full output actually looks like: D:\Oracle\fmw11gr1ps3\user_projects\domains\offline_domain>java weblogic.WLST d:\temp\jms.py   Initializing WebLogic Scripting Tool (WLST) ...   Welcome to WebLogic Server Administration Scripting Shell   Type help() for help on available commands   Connecting to t3://localhost:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'offline_domain'.   Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead.   Location changed to serverRuntime tree. This is a read-only tree with ServerRuntimeMBean as the root. For more help, use help(serverRuntime)   1 JMS Connections found for AdminServer with JMSServers myJMSServer JMS Connection: Host Address = 127.0.0.1 ClientID = None Sessions Current = 16 Consumer: Name = consumer40 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer34 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer37 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer16 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer46 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer49 Messages Received = 2 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer43 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer55 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer25 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer22 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer19 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer52 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer31 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer58 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer28 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Consumer: Name = consumer61 Messages Received = 1 Member Destination Name = myJMSModule!myQueue Disconnected from weblogic server: AdminServer     Exiting WebLogic Scripting Tool. Thanks to Tom Barnes for the hints and the inspiration to write this up. Image of telephone switchboard courtesy of http://www.JoeTourist.net/ JoeTourist InfoSystems
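    If you need producers as well, the same session loop can be extended with something along these lines. This is an untested sketch; double-check the getter names against the JMSProducerRuntimeMBean entry in the WebLogic Server MBean Reference before relying on it:

      jmsProducers = jmsSession.getProducers()
      for jmsProducer in jmsProducers:
          print '     Producer:'
          print '        Name = ' + jmsProducer.getName()
          print '        Messages Sent = ' + str(jmsProducer.getMessagesSentCount())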

    Read the article

  • Cloud Computing Architecture Patterns: Don’t Focus on the Client

    - by BuckWoody
    Normally I try to put topics in the positive in other words "Do this" not "Don't do that". Sometimes its clearer to focus on what *not* to do. Popular development processes often start with screen mockups, or user input descriptions. In a scale-out pattern like Cloud Computing on Windows Azure, that's the wrong place to start. Start with the Data    Instead, I recommend that you start with the data that a process requires. That data might be temporary or persisted, but starting with the data and its requirements helps to define not only the storage engine you need but also drives everything from security to the integrity of the application. For instance, assume the requirements show that the user must enter their phone number, and that this datum is used in a contact management system further down the application chain. For that datum, you can determine what data type you need (U.S. only or International?) the security requirements, whether it needs ACID compliance, how it will be searched, indexed and so on. From one small data point you can extrapolate out your options for storing and processing the data. Here's the interesting part, which begins to break the patterns that we've used for decades: all of the data doesn't have the same requirements. The phone number might be best suited for a list, or an element, or a string, with either BASE or ACID requirements, based on how it is used. That means we don't have to dump everything into XML, an RDBMS, a NoSQL engine, or a flat file exclusively. In fact, one record might use all of those depending on the use-case requirements. Next Is Data Management  With the data defined, we can move on to how to store the data. Again, the requirements now dictate whether we need a full relational calculus or set-based operations, or we can choose another method based on the requirements for the data. And breaking another pattern its OK to store in more than once, in more than one location. We do this all the time for reporting systems and Business Intelligence systems, so this is a pattern we need to think about even for OLTP data. Move to Data Transport How does the data get around? We can use a connection-based method, sending the data along a transport to the storage engine, but in some cases we may want to use a cache, a queue, the Service Bus, or Complex Event Processing. Finally, Data Processing Most RDBMS engines, NoSQL, and certainly Big Data engines not only store data, but can process and manipulate it as well. Its doubtful that you'll calculate that phone number right? Well, if you're the phone company, you most certainly will. And so we see that even once we've chosen the data type, storage and engine, the same element can have different computing requirements based on how it is used. Sure, We Need A Front-End At Some Point Not all data is entered by human hands in fact most data isn't. We don't really need a Graphical User Interface (GUI) we need some way for a GUI to get data into and out of the systems listed earlier.   But when we do need to allow users to enter or examine data, that should be left to the GUI that best fits the device the user has. Ever tried to use an application designed for a web browser on a phone? Or one designed for a tablet on a phone? Its usually quite painful. The siren song of "We'll just write one interface for all devices" is strong, and has beguiled many an unsuspecting architect. But they just don't work out.   Instead, focus on the data, its transport and processing. 
Create API calls or a message system that allows for resilient transport to the device or interface, and let it do what it does best. References Microsoft Architecture Journal:   http://msdn.microsoft.com/en-us/architecture/bb410935.aspx Patterns and Practices:   http://msdn.microsoft.com/en-us/library/ff921345.aspx Windows Azure iOS, Android, Windows 8 Mobile Devices SDK: http://www.windowsazure.com/en-us/develop/mobile/tutorials/get-started-ios/ Windows Azure Facebook SDK: http://ntotten.com/2013/03/14/using-windows-azure-mobile-services-with-the-facebook-sdk-for-windows-phone/

    Read the article

  • ORA-7445 Troubleshooting

    - by [email protected]
        QUICKLINK: Note 153788.1 ORA-600/ORA-7445 Lookup tool Note 1082674.1 : A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]   Have you observed an ORA-07445 error reported in your alert log? While the ORA-600 error is "captured" as a handled exception in the Oracle source code, the ORA-7445 is an unhandled exception error due to an OS exception, which should result in the creation of a core file.  An ORA-7445 is a generic error and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces.  Looking for the best way to diagnose? Whenever an ORA-7445 error is raised a core file is generated.  There may be a trace file generated with the error as well.   Prior to 11g, the core files are located in the CORE_DUMP_DEST directory.   Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data.  Diagnostic files are written into a root directory for all diagnostic data called the ADR home.   Core files at 11g will go to the ADR HOME/cdump directory.   For more information on the Oracle 11g Diagnosability feature see Note 453125.1 11g Diagnosability Frequently Asked Questions Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]   NOTE:  While the core file is captured in the Diagnosability infrastructure, the file may not be included with a diagnostic package.
    1.  Check the Alert Log. The alert log may indicate additional errors or other internal errors at the time of the problem.   In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors.  The ORA-7445 error can be a side effect of the other problems, and you should review the first error and associated core file or trace file and work down the list of errors.   Note 1020463.6 DIAGNOSING ORA-3113 ERRORS Note 1812.1 TECH:  Getting a Stack Trace from a CORE file Note 414966.1 RDA Documentation Index   If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If you see a message at the end of the file "MAX DUMP FILE SIZE EXCEEDED", the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. There could be vital diagnostic information missing in the file, and discovering the root issue may be very difficult.  Set MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information. For pointers on deeper analysis of these errors see   Note 390293.1 Introduction to 600/7445 Internal Error Analysis Note 211909.1 Customer Introduction to ORA-7445 Errors
    2.  Search the 600/7445 Lookup Tool. Visit My Oracle Support to access the ORA-00600 Lookup tool (Note 153788.1). The ORA-600/ORA-7445 Lookup tool may lead you to applicable content in My Oracle Support on the problem, and can be used to investigate the problem with argument data from the error message, or you can pull out key stack pointers from the associated trace file to match up against known bugs.
    3.  "Fine tune" searches in the Knowledge Base. As the ORA-7445 error indicates an unhandled exception in the Oracle source code, your search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set.  The more you can learn about the environment and code leading to the errors, the easier it will be to narrow the hit list to match your problem. 
Note 153788.1 ORA-600/ORA-7445 TroubleshooterNote 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video] NOTE:  If no trace file is captured, see Note 1812.1 TECH:  Getting a Stack Trace from a CORE file.  Core files are managed through 11g Diagnosability, but are not packaged with other diagnostic data automatically.  The core files can be quite large, but may be useful during analysis within Oracle Support.4.  If assistance is required from Oracle Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide at a minimum, the Alert log  Associated tracefile(s) or incident package at 11g Patch level  information Core file(s)  Information about changes in configuration and/or application prior to  issues  If error is reproducible, a self-contained reproducible testcase: Note.232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors RDA report or Oracle Configuration Manager information Oracle Configuration Manager Quick Start Guide Note 548815.1 My Oracle Support Configuration Management FAQ Note 414966.1 RDA Documentation Index ***For reference to the content in this blog, refer to Note.1092832.1 Master Note for Diagnosing ORA-600
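    As a side note, the MAX_DUMP_FILE_SIZE check and change mentioned in step 1 can be done along these lines from SQL*Plus; verify the syntax against the documentation for your release:

      SQL> SHOW PARAMETER max_dump_file_size
      SQL> ALTER SYSTEM SET max_dump_file_size = 'UNLIMITED' SCOPE = BOTH;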

    Read the article

  • WebCenter Content (WCC) Trace Sections

    - by Kevin Smith
    Kyle has a good post on how to modify the size and number of WebCenter Content (WCC) trace files. His post reminded me I have been meaning to write a post on WCC trace sections for a while. searchcache - Tells you if you query was found in the WCC search cache. searchquery - Shows the processing of the query as it is converted form what the user submitted to the end query that will be sent to the database. Shows conversion from the universal query syntax to the syntax specific to the search solution WCC is configured to use. services (verbose) - Lists the filters that are called for each service. This will let you know what filters are available for each service and will also tell you what filters are used by WCC add-on components and any custom components you have installed. The How To Component Sample has a list of filters, but it has not been updated since 7.5, so it is a little outdated now. With each new release WCC adds more filters. If you have a filter that has no code attached to it you will see output like this: services/6    09.25 06:40:26.270    IdcServer-423    Called filter event computeDocName with no filter plugins registered When a WCC add-on or custom component uses a filter you will see trace output like this: services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionValidateCheckinData with parameter postValidateCheckinDataservices/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionFilters with parameter postValidateCheckinData As you can see from this sample output it is possible to have multiple code points using the same filter. systemdatabase - Dumps the database call AFTER it executes. This can be somewhat troublesome if you are trying to track down some weird database problems. We had a problem where WCC was getting into a deadlock situation. We turned on the systemdatabase trace section and thought we had the problem database call, but it turned out since it printed out the database call after it was executed we were looking at the database call BEFORE the one causing the deadlock. We ended up having to turn on tracing at the database level to see the database call WCC was making that was causing the deadlock. socketrequests (verbose) - dumps the actual messages received and sent over the socket connection by WCC for a service. If you have gzip enabled you will see junk on the response coming back from WCC. For debugging disable the gzip of the WCC response.Here is an example of the dump of the request for a GET_SEARCH_RESULTS service call. 
socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: REMOTE_USER=sysadmin.USER-AGENT=Java;.Stel socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: lent.CIS.11g.CONTENT_TYPE=text/html.HEADER socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: _ENCODING=UTF-8.REQUEST_METHOD=POST.CONTEN socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: T_LENGTH=270.HTTP_HOST=CIS.$$$$.NoHttpHead socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: ers=0.IsJava=1.IdcService=GET_SEARCH_RESUL socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: [email protected] socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: calData.SortField=dDocName.ClientEncoding= socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: UTF-8.IdcService=GET_SEARCH_RESULTS.UserTi socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: meZone=UTC.UserDateFormat=iso8601.SortDesc socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: =ASC.QueryText=dDocType..matches..`Documen socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: t`.@end. userstorage, jps - Provides trace details for user authentication and authorization. Includes information on the determination of what roles and accounts a user has access to. In 11g a new trace section, jps, was added with the addition of the JpsUserProvider to communicate with WebLogic Server. The WCC developers decide when to use the verbose option for their trace output, so sometime you need to try verbose to see what different information you get. One of the things I would always have liked to see if the ability to turn on verbose output selectively for individual trace sections. When you turn on verbose output you get it for all trace sections you have enabled. This can quickly fill up your trace files with a lot of information if you have the socket trace section turned on.

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table.
    Use Warehouse
    CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))
    CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)
    CREATE PROC Integration.MergePolicy as
    begin
    begin tran
    Merge fact.Policy as tgt
    Using Integration.MergePolicy as Src
    On (tgt.PolicyId = Src.PolicyId)
    When not matched by Target then
    Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
    values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
    When matched and src.Operation = 'U' then
    Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
    When matched and src.Operation = 'D' then
    Delete
    ;
    delete from Integration.MergePolicy
    commit
    end
    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans. 
(I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting.Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. I've reproduced the above from memory, the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a cluster index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and added the clustered index, both of which took very little time). I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike  
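    For reference, the eventual fix amounts to little more than the two statements below. The index name is my own choice, and on the bloated heap it was quicker to truncate first and then build the index:

      -- TRUNCATE deallocates the heap's pages immediately, unlike DELETE
      TRUNCATE TABLE Integration.MergePolicy;

      -- A clustered index keeps the page count honest for an (almost) empty work table
      CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId
          ON Integration.MergePolicy (PolicyId);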

    Read the article

  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah I know I don’t blog much anymore, because life has taken me places that don’t involve the interwebs unfortunately.  I am in the midst of planning two events, starting a non for profit, creating more sessions for various conferences, submitting to various conferences, working a 40 hour a week job, attempting to hang out with boyfriend/friends/family.  So you can see that list does not include this blog sadly that’s how it goes sometimes.  The bottom piece very important over any of the top pieces.  I haven’t seen St. Louis in a while and I get to go back.  I was gone from home for MVP Summit and Best Practices Conference, so the boyfriend and cat didn’t get to see me either for a bit.  Then you have to add in the whole toilet being broken fiasco this week.  Maintenance really thought it would be cool to turn off the ability to flush.  I mean who does that?  Then when we call the owner he comes by turns it on and we figure it was an accident, because well the next day no one came by to tell us there was a leak.  It was all kinds of strangeness and involved me running to other people’s toilets.  As Dan Usher would say, I was a sad panda for a few days.  So I guess I wanted to post a few thoughts here just because I can.  I do not like multiple content editor webparts embedded with html files in numerous pages doing the same thing.  I will tell you why I don’t like these particular webparts and the way they are being used.  First off if you have a bunch of pages with script includes it’s about time you should just dump them into the masterpage.  Why bother finding all 20 pages and changing those pages when you can just use a single masterpage that already exists? The other thing that is bothering me days is screen scraping.  Just don’t do it, because in 2010 you will find the UI is substantially slower.  I understand you are new and you have no idea what to do.  You are also using 2007 am I right?  So then you need to go to codeplex.com and type in a search for SPServices.  Download it, use it, love it and then have it’s babies (well maybe don’t go so far this is not the GRID in Tron). If you have a ton of constants in your code why did you not go in and create a webpart with a bunch of properties and/or link to a configuration list hidden in the browser?  This type of property and list could help you out in the long run.  The power users and administrators can now change the control without you having to compile it over and over again.  It’s good stuff.  Also, you can change the control without compiling it, especially in 2007 where you have to do a farm solution.  In 2010 you can do a sandbox solution I guess, but shouldn’t you make it as easy and supportable as possible for other users? In conclusion I’m an angry person when it comes to viewing something repeatedly and analyzing it in a system.  Now we will move on to the next topic…MVP Summit…So yeah I can’t really talk about particulars, but I can talk about my experience as a person.  Don’t build something up to be cooler than it is only to be dropped from your 10,000 foot perch.  My experience was great, but the content overall was something to be desired.  It’s ok I got to meet a lot of people I would not have met if I had not gone.  Some of it was surreal, such as product group members showing up and talking to us.  It was pretty neat.  Plus I never had the chance to get to that mythical MS Office in Redmond.  Prior to Summit it was like Rainbow Brites unicorn trying taunting me on television when I was a kid.  
So I guess with all that said I give it a B.  It was awesome in some way, but lacking in other ways.  The cool part is that I got to go.  Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I’m going to pass and give myself a piece of Sunday to play Rockband and do 800 other things.  I hope the two of you who read this blog are well.  I’ll catch you all at another juncture.  Have a good weekend and varying holidays in between. Technorati Tags: SharePoint,MVP Summit,JQuery,Javascript

    Read the article

  • Swap partition not recognized (The disk drive with UUID=... is not ready yet or not present)

    - by ladaghini
    I think I had an encrypted swap partition, because I chose to encrypt my home directory during the installation. I believe that's what the line with /dev/mapper/cryptswap1 ... in my /etc/fstab is all about. I did something to bork my swap because on the next boot, I got a message (paraphrased): The disk drive for /dev/mapper/cryptswap1 is not ready yet or not present. Wait to continue. Press S to skip or M to manually recover. (As a side note, pressing S or M seemed to do nothing different from just waiting.) Here's what I've tried: This tutorial on how to fix the swap partition not mounting. However, in the above, the mkswap command fails because the device is busy. So I booted from a live USB, ran GParted to reformat the swap partition (which showed up as an unknown fs type), and chrooted into the broken system to try that tutorial again. I also adjusted /etc/initramfs-tools/conf.d/resume and /etc/fstab to reflect the new UUID generated from formatting the partition as a swap. That still didn't work; instead of /dev/mapper/cryptswap1 not present, "The disk drive with UUID=[swap partition's UUID] is not ready yet or not present..." So I decided to start afresh as though I never had created a swap partition in the first place. From the Live USB, I deleted the swap partition altogether (which, again showed up in GParted as an unknown fs type), removed the swap and cryptswap entries in /etc/fstab as well as removed /etc/initramfs-tools/conf.d/resume and /etc/crypttab. At this point the main system shouldn't be considered broken because there is no swap partition and no instructions to mount one, right? I didn't get any errors during startup. I followed the same instructions to create and encrypt the swap partition, starting with creating a partition for the swap, though I think fdisk said a reboot was necessary to see changes. I was confident the 3rd process above would work, but the problem yet persists. Some relevant info (/dev/sda8 is the swap partition): /etc/fstab file: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=4c11e82c-5fe9-49d5-92d9-cdaa6865c991 / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda5 during installation UUID=4031413e-e89f-49a9-b54c-e887286bb15e /boot ext4 defaults 0 2 # /home was on /dev/sda7 during installation UUID=d5bbfc6f-482a-464e-9f26-fd213230ae84 /home ext4 defaults 0 2 # swap was on /dev/sda8 during installation UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 none swap sw 0 0 /dev/mapper/cryptswap1 none swap sw 0 0 output of sudo mount: /dev/sda6 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda5 on /boot type ext4 (rw) /dev/sda7 on /home type ext4 (rw) /home/undisclosed/.Private on /home/undisclosed type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=cbae1771abd34009,ecryptfs_fnek_sig=7cefe2f59aab8e58) gvfs-fuse-daemon on /home/undisclosed/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=undisclosed) output of sudo blkid (note that /dev/sda8 is missing): /dev/sda1: LABEL="SYSTEM" UUID="960490E80490CC9D" TYPE="ntfs" /dev/sda2: UUID="D4043140043126C0" TYPE="ntfs" /dev/sda3: LABEL="Shared" UUID="80F613E1F613D5EE" TYPE="ntfs" /dev/sda5: UUID="4031413e-e89f-49a9-b54c-e887286bb15e" TYPE="ext4" /dev/sda6: UUID="4c11e82c-5fe9-49d5-92d9-cdaa6865c991" TYPE="ext4" /dev/sda7: UUID="d5bbfc6f-482a-464e-9f26-fd213230ae84" TYPE="ext4" /dev/mapper/cryptswap1: UUID="41fa147a-3e2c-4e61-b29b-3f240fffbba0" TYPE="swap" output of sudo fdisk -l: Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xdec3fed2 Device Boot Start End Blocks Id System /dev/sda1 * 2048 409599 203776 7 HPFS/NTFS/exFAT /dev/sda2 409600 210135039 104862720 7 HPFS/NTFS/exFAT /dev/sda3 210135040 415422463 102643712 7 HPFS/NTFS/exFAT /dev/sda4 415424510 625141759 104858625 5 Extended /dev/sda5 415424512 415922175 248832 83 Linux /dev/sda6 415924224 515921919 49998848 83 Linux /dev/sda7 515923968 621389823 52732928 83 Linux /dev/sda8 621391872 625141759 1874944 82 Linux swap / Solaris Disk /dev/mapper/cryptswap1: 1919 MB, 1919942656 bytes 255 heads, 63 sectors/track, 233 cylinders, total 3749888 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xaf5321b5 /etc/initramfs-tools/conf.d/resume file: RESUME=UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 /etc/crypttab file: cryptswap1 /dev/sda8 /dev/urandom swap,cipher=aes-cbc-essiv:sha256 output of sudo swapon -as: Filename Type Size Used Priority /dev/mapper/cryptswap1 partition 1874940 0 -1 output of sudo free -m: total used free shared buffers cached Mem: 
1476 1296 179 0 35 671 -/+ buffers/cache: 590 886 Swap: 1830 0 1830 So, how can this be fixed?
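
One plausible reading of the output above, offered only as a diagnostic aid and not as part of the original question: because /etc/crypttab keys cryptswap1 from /dev/urandom, the underlying /dev/sda8 never keeps a persistent swap signature, which is why it is missing from the blkid listing, yet /etc/fstab and the initramfs resume file still reference a swap UUID. A small sketch (in Python, with the standard Ubuntu file locations assumed) that cross-checks every UUID referenced in those files against what blkid actually reports can make that kind of stale reference visible:

# Diagnostic sketch: list UUIDs referenced in /etc/fstab and the initramfs
# resume file that blkid cannot see. Run with sudo so blkid reports everything.
import re
import subprocess

def referenced_uuids(paths):
    uuids = set()
    for path in paths:
        try:
            with open(path) as f:
                for line in f:
                    line = line.split("#", 1)[0]      # ignore comments
                    uuids.update(re.findall(r"UUID=([0-9a-fA-F-]+)", line))
        except FileNotFoundError:
            pass                                      # file may have been removed already
    return uuids

def visible_uuids():
    out = subprocess.run(["blkid"], capture_output=True, text=True).stdout
    return set(re.findall(r'UUID="([^"]+)"', out))

if __name__ == "__main__":
    wanted = referenced_uuids(["/etc/fstab", "/etc/initramfs-tools/conf.d/resume"])
    for uuid in sorted(wanted - visible_uuids()):
        print("referenced but not present:", uuid)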

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts-and-all approach to setting up a web role on Azure. In this post I'll describe how to add an SQL Azure database to the project. This will be described with as minimal an amount of code and screen dumps as possible. All questions are welcome in the comments area. Please don't email, since questions answered in the comments field are made available to other visitors. As an example we will add a comments section to the site we used in the previous post (link here). Steps: 1. Create a Comments entity and then use scaffolding to set up the controller and view, and add a connection string to web.config. 2. Create the SQL Azure database in the Management Portal and link the new database. 3. Test it online!   1. Right-click the Models folder, choose Add, choose "Class…". Name the class Comment. 1.1 Replace the code in the class with the following: using System.Data.Entity; namespace MvcWebRole1.Models { public class Comment { public int CommentId { get; set; } public string Name { get; set; } public string Content { get; set; } } public class CommentsDb : DbContext { public DbSet<Comment> CommentEntries { get; set; } } } Now Entity Framework can create a database and a table named Comment. Build your project to verify there are no build errors.   1.2 Right-click the Controllers folder, choose Add, choose "Class…". Name the class CommentsController and fill out the values as in the example below.     1.3 Click Add. Visual Studio now creates the default Views for CRUD operations and a Controller adhering to these, and opens them. 1.4 Open Web.config and add the following connection string in the <connectionStrings> node. <add name="CommentsDb" connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />   1.5 Save All and press F5 to start the application. 1.6 Go to http://127.0.0.1:81/Comments which will redirect you through CommentsController to the Index View, which looks like this:     Click Create new. In the Create view, add a name and content and press Create.   // POST: /Comments/Create [HttpPost] public ActionResult Create(Comment comment) { if (ModelState.IsValid) { db.CommentEntries.Add(comment); db.SaveChanges(); return RedirectToAction("Index"); } return View(comment); }   The default View() is Index, so that is the View you will come to. Looking like this: // GET: /Comments/ public ActionResult Index() { return View(db.CommentEntries.ToList()); } Resulting in the following screen dump (success!):   2. Now, go to the Management Portal and create a new db.   2.1 With the new database created, click the DB icon in the leftmost menu. Then click the newly created database. Click DASHBOARD in the top menu. Finally click Connection strings in the right menu to get the connection string we need to add to our web.debug.config file.   2.2 Now, take a copy of the connection string added earlier to web.config and paste it into web.debug.config in the connectionStrings node. Replace everything within the quotes in the copied connection string with what you got from SQL Azure. You will have something like this:   2.3 Rebuild the application, right-click the cloud project and choose "Package…" (if you haven't set up a publishing profile, which we will do in our next blog post). 
Remember to choose the right config file, use debug for staging and release for production so your databases won’t collide. You should see something like this:   2.4 Go to Management Portal and click the Web Services menu, choose your service and click update in the bottom menu.   2.5 Link the newly created database to your application. Click the LINKED RESOURCES in the top menu and then click “Link” in the bottom menu. You should get something like this. 3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser and then go to ~/Comments to try it out just the way we did locally. Success and end of this story!

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of POP email addresses and even an Exchange address to my client's server... times have changed. What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have many more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the junk email would stop, but that hasn't happened yet, any more than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their cars... but that's another whole topic and I digress. So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money? Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter, huh... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found: I installed it, and had to do a couple of things: 1) SPAMfighter stuffed the SPAMfighter folder into my client's Exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter client settings for Outlook, I told it to put all spam there. 2) It didn't seem to be doing anything. There's a ribbon button where you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates. 3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! 
I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder, thinking it was interfering. 4) Day 2 seemed to go about like Day 1... but I hung in there. 5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!! I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I can now sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes, this is simply another way of using the delete key, but you know what? ... it feels good :) Here's a screenshot of the stats after just about 48 hours of being onboard: Note that all the ones blocked by me were during Days 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

  • A very useful custom component

    - by Kevin Smith
    Whenever I am debugging a problem in WebCenter Content (WCC) I often find it useful to see the contents of the internal data binder used by WCC when executing a service. I want to know the value of all parameters passed in by the caller, either a user in the web GUI or an application calling the service via RIDC or web services. I also want to know the value of binder variables calculated by WCC as it processes a service. What defaults has it applied based on configuration settings or profile rules? What values has it derived based on the user input? To help with this I created a component that uses a Java filter to dump out the contents of the internal data binder to the WCC trace file. It dumps the binder contents using the toString() method. You can register this filter code using many different filter hooks to see how the binder is updated as WCC processes the service. By default, it uses the validateStandard filter hook, which is useful during a CHECKIN service. It uses the system trace section, so make sure that trace section is enabled before looking for the output from this component. Here is some sample output: system/6    10.09 09:57:40.648    IdcServer-1    filter: postParseDataForServiceRequest, binder start -- system/6    10.09 09:57:40.698    IdcServer-1    *** LocalData *** system/6    10.09 09:57:40.698    IdcServer-1    (10 keys + 0 defaults) system/6    10.09 09:57:40.698    IdcServer-1    ClientEncoding=UTF-8 system/6    10.09 09:57:40.698    IdcServer-1    IdcService=CHECKIN_UNIVERSAL system/6    10.09 09:57:40.698    IdcServer-1    NoHttpHeaders=0 system/6    10.09 09:57:40.698    IdcServer-1    UserDateFormat=iso8601 system/6    10.09 09:57:40.698    IdcServer-1    UserTimeZone=UTC system/6    10.09 09:57:40.698    IdcServer-1    dDocTitle=Check in from RIDC using Framework Folder system/6    10.09 09:57:40.698    IdcServer-1    dDocType=Document system/6    10.09 09:57:40.698    IdcServer-1    dSecurityGroup=Public system/6    10.09 09:57:40.698    IdcServer-1    parentFolderPath=/folder1/folder2 system/6    10.09 09:57:40.698    IdcServer-1    primaryFile=testfile5.bin system/6    10.09 09:57:40.698    IdcServer-1    ***  RESULT SETS  *** system/6    10.09 09:57:40.698    IdcServer-1    binder end -------------------------------------------- See the readme included in the component for more details. You can download the component from here.

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and HTTP networking code. All C++. Most of it originated from other developers who have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous through the use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread, and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge-case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempted fixes are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include the following: Common mistake #1 - Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in the expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is executing. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this: void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common mistake #2 - Fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock, but without handling the case where the object just got updated by the other thread! Common mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer but forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, and then spurious or late callbacks are invoked on an object in a state not expecting any more calls. There are other edge cases, but the bottom line is this: multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer and working out a more appropriate fix. But I suspect they are often confused about how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just rewrite everything, and the majority of the code isn't all that bad. But I'm looking to refactor the code such that threading issues can be avoided altogether. One approach I am considering is this. 
For each significant platform feature, have a single dedicated thread onto which all events and network callbacks get marshalled, similar to COM apartment threading in Windows with the use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following: Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for (that is, a UML equivalent for discussing threads across objects and classes). Educating your development team on the issues with multithreaded code. What would you do?
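
The apartment-style design sketched above (one dedicated thread per component, with every callback marshalled onto it through a message loop) is language-agnostic, so here is a minimal illustration of the pattern in Python rather than the team's C++; the fake HTTP call, the queue-based loop, and all names are assumptions for the sketch, not the poster's code. Blocking work still runs on a pool, but each completion callback is posted back to the component's own queue, so the component's state is only ever touched from one thread and the Shutdown-versus-callback races described above cannot occur.

# Sketch of the "one component, one dispatch thread" idea described above.
# Long-running work runs on a pool; completion callbacks are marshalled back
# onto the component's own thread via a queue, so Foo's state stays single-threaded.
import queue
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class ComponentThread:
    """A tiny message loop: everything posted here runs on one thread."""
    def __init__(self):
        self._queue = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def post(self, fn, *args):
        self._queue.put((fn, args))

    def stop(self):
        self._queue.put(None)
        self._thread.join()

    def _run(self):
        while True:
            item = self._queue.get()
            if item is None:
                return
            fn, args = item
            fn(*args)

def fake_http_get(url):
    time.sleep(0.1)              # stands in for the real request
    return 200

class Foo:
    """Every method below runs on self._loop's thread, never on the pool."""
    def __init__(self, loop, pool):
        self._loop, self._pool, self._bar = loop, pool, object()

    def start_request(self, url):
        future = self._pool.submit(fake_http_get, url)
        # Marshal the completion back onto the component thread.
        future.add_done_callback(
            lambda f: self._loop.post(self.on_request_complete, f.result()))

    def on_request_complete(self, status):
        if self._bar is None:    # already shut down; ignore the late callback
            return
        print("DoSomethingImportant", status)

    def shutdown(self):
        self._bar = None         # safe: runs on the same thread as the callback

if __name__ == "__main__":
    loop, pool = ComponentThread(), ThreadPoolExecutor(max_workers=4)
    foo = Foo(loop, pool)
    loop.post(foo.start_request, "http://example.com")
    time.sleep(0.5)              # let the fake request finish
    loop.post(foo.shutdown)
    loop.stop()
    pool.shutdown()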

    Read the article

  • FILESTREAM/FILETABLE Clarifications for Implementation

    - by user1209734
    Recently our team was looking at FILESTREAM to expand the capabilities of our proprietary application. The main purpose of this app is managing the various PDFs, images, and documents for all of the parts we manufacture. Our ASP application uses a few third-party tools to allow viewing of these files. We currently have 980GB of data on the file server. We have around 200GB of binary data in SQL Server that we would like to extract since it is not performing well; hence FILESTREAM seems to be a good compromise for the two major data storage/access issues. A few things are not exactly clear to us: Can FILESTREAM store its data on a drive that is not locally attached, or not? We already have a file server with a RAID 10 (1.5TB drives). This server stores all of the documents right now; would we have to move these drives to the SQL Server for FILESTREAM? That would be a tough bullet to bite since the server is also doubling as the application server (two VMs on one physical server). FILETABLE stores the common metadata about the files, but where is the full-text part of it stored to allow searching of files like doc/docx? Is this separate? Are you able to freely add criteria to this to search by? If so, any links to clarify would be appreciated. Can FILETABLE be referenced in another table with a foreign key? Thank you in advance. EDIT: For those having these questions, this web video covered everything and more in terms of explaining FILESTREAM from 2008 to 2012 and the caveats to consider (I would seriously rep him if I could): http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2270 In conclusion we will not be using FILESTREAM as it would be way too huge of an upsurge to accommodate for the investment. EDIT 2: Update to #1 - After carefully assessing FileTable in addition to FILESTREAM we got a winning combination. We did have to move the files over to the new server (wasn't too painful since they were on the same VM). It honestly took more time to write an extraction tool to dump the binary data within SQL to the file system. Update to #2 - This was separate, but again Bob had an excellent webinar explaining this: http://channel9.msdn.com/Events/TechEd/Europe/2012/DBI411 Update to #3 - Using TFT inheritance we recycled the Docs table we had (minus the huge binary blobs), which required very few changes in our legacy apps. This was a huge upshot for the developer team.

    Read the article

  • Removing expired certificates from LDS (new ver of ADAM)

    - by jonthebrewer
    Hi all. This is my situation: We are in the process of replacing a certificate store currently hosted on Sun's iPlanet with Microsoft's Lightweight Directory Services (the new version of ADAM with Server 2008). These certificates have been imported into LDS into an application partition (say o=myorg, C=AU). Under this structure I have around 40,000 OUs, each one representing a customer; under each customer's OU are one or more user (iNetOrg) objects (around 60,000 in all). In each user are one or more certificates in the UserCertificate attribute. A combination of in-house written application code and proprietary PKI code reads and publishes these certificates to validate financial transactions. As the LDAP path of the certificates is stored within the customer certificates (and within the application code) and there is zero appetite for changing any of the code, I have had to pick up the iPlanet directory as a whole and dump it in LDS in the same structure. (I will not be using or hosting a Microsoft CA, just implementing an LDAP-compliant directory to host these certificates.) We have fully tested the application using the data in LDS and everything works fine - here is my dilemma and question (finally, phew!). There was no process put in place for removing revoked or expired certificates; consequently the vast majority of the data is completely useless, and the system has been running for about 8 years! I have done a quick analysis and I estimate that at least 80% of the data is no longer valid. As I am taking on responsibility for managing the directory I would like to start with a clean directory. Does anyone have any idea how I can clean up these expired certificates? I am not a highly experienced scripter but have some background in VB. I have been researching the use of CAPICOM and have a feeling it may be able to be used, but exactly in what way I am not sure. I would prefer to write a script for which I could specify an expiration date (say, any certs that expired prior to 2010) and then run against the LDS partition. This way I can reuse the script periodically to clean up the directory (as mentioned above - I have no way to adjust the applications that are writing the certs, this is with a third party). Another, less attractive, alternative is to massage the LDIF file (2.7 million lines!) to rip the certs out prior to the import. Any help and advice MUCH appreciated. Cheers Jon
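
The poster asks about VB and CAPICOM; purely to illustrate the overall shape of such a cleanup (walk the partition, parse each userCertificate value, compare its expiry to a cutoff, and delete only the expired values), here is a sketch in a different stack, Python with the ldap3 and cryptography packages. The server name, bind credentials, cutoff date, and table of attributes are all assumptions, and at 60,000 entries a real run would need ldap3's paged-search loop plus a dry run against a copy of the partition first.

# Illustration only: delete expired values of userCertificate under an LDS
# application partition. All connection details and the cutoff are placeholders.
from datetime import datetime, timezone

from cryptography import x509
from ldap3 import Connection, Server, MODIFY_DELETE, SUBTREE

CUTOFF = datetime(2010, 1, 1, tzinfo=timezone.utc)   # certs expiring before this are removed
BASE = "o=myorg,c=AU"                                 # application partition root

def is_expired(der_bytes):
    try:
        cert = x509.load_der_x509_certificate(der_bytes)
        # cryptography >= 42; older versions expose not_valid_after instead
        return cert.not_valid_after_utc < CUTOFF
    except ValueError:
        return False                                  # leave anything unparseable alone

def cleanup(conn):
    # For 60,000 entries use ldap3's paged search; a single search is shown for brevity.
    conn.search(BASE, "(userCertificate=*)", search_scope=SUBTREE,
                attributes=["userCertificate"])
    for entry in conn.entries:
        stale = [v for v in entry["userCertificate"].raw_values if is_expired(v)]
        if stale:
            conn.modify(entry.entry_dn,
                        {"userCertificate": [(MODIFY_DELETE, stale)]})

if __name__ == "__main__":
    server = Server("lds.example.com", port=389)
    with Connection(server, user="CN=admin,O=myorg,C=AU", password="...", auto_bind=True) as conn:
        cleanup(conn)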

    Read the article

  • Including configuration files while compiling a Flex application with MXMLC

    - by Daniel
    Hello there, I'm using: - Flex SDK 3.5.0 - Parsley 2.2.2. - Flash Builder 4 Down in my src folder (which is configured as part of the source path in the Flash Builder), I have a logging.xml which I configure via Parsley: FlexLoggingXmlSupport.initialize(); XmlContextBuilder.build("com/company/product/util/log/logging.xml"); When I run my application through Flash Builder, the XmlContentBuilder seems to locate the logging.xml (the implementation is a regular URLLoader one). When I compile my application using MXMLC (whether in Ant or command-line), and then run the swf, I get the following error: Cause(0): Error loading com/company/product/util/log/logging.xml: Error in URLLoader - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml Here is the MXMLC tag in Ant: <mxmlc file="${product.src.dir}/com/company/product/view/Main.mxml" output="${product.target.dir}/${product.release.filename}" keep-generated-actionscript="false"> <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml" /> <!-- source paths --> <source-path path-element="${FLEX_HOME}/frameworks" /> <compiler.source-path path-element="${product.src.dir}" /> <compiler.source-path path-element="${product.locale.dir}/{locale}" /> <compiler.library-path dir="${product.basedir}" append="true"> <include name="libs" /> </compiler.library-path> <warnings>false</warnings> <debug>false</debug> </mxmlc> And here is the command line: \mxmlc.exe -output "C:\temp\Rap.swf" -load-config "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks\flex-config.xml" -source-path "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks" C:\workspace\folder01\product\src C:\workspace\folder01\product\locale\en_US -library-path+=C:\workspace\folder01\product\libs -file-specs C:\workspace\folder01\product\src\com\company\product\view\main.mxml Now perhaps I don't get this correctly, but as far as I understand the SWF should be compiled with all of the resources in the paths I give MXMLC as source-paths. For some reason it seems that the XML file is not compiled into the SWF, hence the relative path of the XmlContentBuilder isn't located successfully. I could not find any argument to provide the MXMLC with that might solve this. I tried using the -dump-config option with the Flash Builder's compiler, then giving that configuration to MXMLC, but it didn't work either. I tried providing the XmlContentBuilder with an absolute path. That worked fine when I compiled with MXMLC via Ant, but still didn't work when I used MXMLC in the command-line... I'd be happy to be enlightened here, regarding all subjects - using MXMLC, accessing resources with relative paths, configuring logging in Parsley, etc. Many thanks in advance, Daniel

    Read the article

  • Importing a large dataset into a database

    - by peaceful
    I'm a beginning programmer in the areas relevant to this question, so if possible, it'd be helpful to avoid assuming I know a lot already. I'm trying to import the OpenLibrary dataset into a local Postgres database. After it's imported, I plan to use it as a starting seed for a Ruby on Rails application that will include information on books. The OpenLibrary datasets are available here, in a modified JSON format: http://openlibrary.org/dev/docs/jsondump I only need very basic information for my application, much less than what is provided in the dumps. I'm only trying to get out book titles, author names, and relationships between books and authors. Below are two typical entries from their dataset, the first for an author, and the second for a book (they seem to have an entry for each edition of a book). The entries seem to lead off with a primary key, and then with a type, before including the actual JSON database dump. /a/OL2A /type/author {"name": "U. Venkatakrishna Rao", "personal_name": "U. Venkatakrishna Rao", "last_modified": {"type": "/type/datetime", "value": "2008-09-10 08:44:01.978456"}, "key": "/a/OL2A", "birth_date": "1904", "type": {"key": "/type/author"}, "id": 99, "revision": 3} /b/OL345M /type/edition {"publishers": ["Social Science Research Project, Dept. of Geography, University of Dacca"], "pagination": "ii, 54 p.", "title": "Land use in Fayadabad area", "lccn": ["sa 65000491"], "subject_place": ["East Pakistan", "Dacca region."], "number_of_pages": 54, "languages": [{"comment": "initial import", "code": "eng", "name": "English", "key": "/l/eng"}], "lc_classifications": ["S471.P162 E23"], "publish_date": "1963", "publish_country": "pk ", "key": "/b/OL345M", "authors": [{"birth_date": "1911", "name": "Nafis Ahmad", "key": "/a/OL302A", "personal_name": "Nafis Ahmad"}], "publish_places": ["Dacca, East Pakistan"], "by_statement": "[by] Nafis Ahmad and F. Karim Khan.", "oclc_numbers": ["4671066"], "contributions": ["Khan, Fazle Karim, joint author."], "subjects": ["Land use -- East Pakistan -- Dacca region."]} The uncompressed dumps are enormous, about 2GB for the authors list and 18GB for the book editions list. OpenLibrary does not provide any tools for this themselves; they provide a simple unoptimized Python script for reading in sample data (which, unlike the actual dumps, comes in pure JSON format), but they estimate that if it were modified for use on their actual data it would take 2 months (!) to finish loading the data. How can I read this into the database? I assume I'll need to write a program to do this. What language should I use, and is there any guidance on how I should do it so it finishes in a reasonable amount of time? The only scripting language I have any experience with is Ruby.
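
Whatever the language, the shape of the answer is the same: stream the dump line by line, never load it all into memory, and batch the inserts. Purely as an illustration (in Python with psycopg2 rather than the Ruby the poster knows, assuming each line is key, type, then a JSON blob separated by whitespace as in the samples above, and using placeholder table and column names), a sketch of that approach follows; the same pattern ported to Ruby, or a pre-processing pass that feeds Postgres COPY, keeps the load time at hours rather than months.

# Rough sketch: stream an OpenLibrary dump and bulk-insert just the fields needed.
# Assumes lines look like the samples above (key, type, JSON); table/column names
# and the connection string are placeholders, not part of the original question.
import json

import psycopg2
from psycopg2.extras import execute_values

BATCH = 5000

def records(path, wanted_type):
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, rectype, payload = line.split(maxsplit=2)
            if rectype == wanted_type:
                yield key, json.loads(payload)

def import_authors(cur, path):
    batch = []
    for key, doc in records(path, "/type/author"):
        batch.append((key, doc.get("name")))
        if len(batch) >= BATCH:
            execute_values(cur, "INSERT INTO authors (ol_key, name) VALUES %s", batch)
            batch = []
    if batch:
        execute_values(cur, "INSERT INTO authors (ol_key, name) VALUES %s", batch)

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=openlibrary")
    with conn, conn.cursor() as cur:          # commits on success
        import_authors(cur, "authors_dump.txt")
    conn.close()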

    Read the article

  • XML Socket and Regular Socket in Flash/Flex does not send message immediately.

    - by kramer
    I am trying to build a basic RIA where my Flex application client opens an XML socket, sends the xml message "people_list" to server, and prints out the response. I have ruby at the server side and I have successfully set up the security policy stuff. The ruby xml server, successfully accepts the connections from Flex, successfully detects when they are closed and can also send messages to client. But; there is a problem... It cannot receive messages from flex client. The messages sent from flex client are queued and sent as one package when the socket is closed. Therefore, the whole wait-for-request-then-reply thing is not working... This is also -kinda- mentioned in the XMLSocket.send() document, where it is stated that the messages are sent async so; they may be delivered at any time in future. But; I need them to be synced, flushed or whatever. This is the server side code: require 'socket' require 'observer' class Network_Reader_Ops include Observable @@reader_listener_socket = UDPSocket.new @@reader_broadcast_socket = UDPSocket.new @@thread_id def initialize @@reader_broadcast_socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_BROADCAST, 1) @@reader_broadcast_socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1) @@reader_listener_socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1) @@reader_broadcast_socket.bind('', 50050) @@reader_listener_socket.bind('', 50051) @@thread_id = Thread.start do loop do begin text, sender = @@reader_listener_socket.recvfrom_nonblock 1024 print("Knock response recived: ", text) notify_observers text rescue Errno::EAGAIN retry rescue Errno::EWOULDBLOCK retry end end end end def query @@reader_broadcast_socket.send("KNOCK KNOCK", 0, "255.255.255.255", 50050) end def stop Thread.kill @@thread_id end end class XMLSocket_Connection attr_accessor :connection_id def update (data) connection_id.write(data+"\0") end end begin # Set EOL for Flash $/ = '\x00' xml_socket = TCPServer.open('', '4444') security_policy_socket = TCPServer.open('', '843') xml_socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1) security_policy_socket.setsockopt(Socket::SOL_SOCKET, Socket::SO_REUSEADDR, 1) a = Thread.start do network_ops = nil loop { accepted_connection = xml_socket.accept print(accepted_connection.peeraddr, " is accepted\n") while accepted_connection.gets incoming = $_.dump print("Received: ", incoming) if incoming == "<request>readers on network</request>" then network_ops = Network_Reader_Ops.new this_con = XMLSocket_Connection.new this_con.connection_id = accepted_connection network_ops.add_observer this_con network_ops.query end end if not network_ops.nil? then network_ops.delete_observer this_con network_ops.stop network_ops = nil end print(accepted_connection, " is gone\n") accepted_connection.close } end b = Thread.start do loop { accepted_connection = security_policy_socket.accept Thread.start do current_connection = accepted_connection while current_connection.gets if $_ =~ /.*policy\-file.*/i then current_connection.write("<cross-domain-policy><allow-access-from domain="*" to-ports="*" /></cross-domain-policy>\0") end end current_connection.close end } end a.join b.join rescue puts "FAILED" retry end And this is the flex/flash client side code: UPDATE: I have also tried using regular socket and calling flush() method but; the result was same. 
private var socket:XMLSocket = new XMLSocket(); protected function stopXMLSocket():void { socket.close(); } protected function startXMLSocket():void { socket.addEventListener(DataEvent.DATA, dataHandler); socket.connect(xmlSocketServer_address, xmlSocketServer_port); socket.send("<request>readers on network</request>"); } protected function dataHandler(event:DataEvent):void { mx.controls.Alert.show(event.data); } How do I achieve the described behaviour?
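
One detail worth keeping in mind with the setup above: the Flash XMLSocket protocol terminates every message, in both directions, with a zero byte, so the server has to split the incoming stream on that NUL rather than on newlines (and note that in Ruby the single-quoted '\x00' is the four-character literal backslash-x-0-0, not a null byte, which is worth checking in the listing above). Purely as an illustration of the framing, and not the poster's Ruby server, a minimal NUL-framed reader in Python:

# Illustration only: a NUL-framed socket reader (Flash XMLSocket terminates each
# message with a zero byte, and expects replies to be terminated the same way).
import socket

def serve(port=4444):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    conn, _addr = srv.accept()
    buf = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:                          # client closed the connection
            break
        buf += chunk
        while b"\x00" in buf:                  # one complete XMLSocket frame
            frame, buf = buf.split(b"\x00", 1)
            print("received:", frame.decode("utf-8"))
            conn.sendall(b"<response>ok</response>\x00")
    conn.close()
    srv.close()

if __name__ == "__main__":
    serve()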

    Read the article

  • Problem with displaying and downloading email attachments using the zend framework

    - by Ali
    Hi guys, I'm trying to display emails in my inbox and their respective attachments. At the same time I also wish to be able to download the attachments; however, I'm kinda stuck here. Here is the code I use to display the attachments: $one_message = $mail->getMessage($i); $one_message->id = $i; $one_message->UID = $mail->getUniqueId($i); // get the contact of this message $email = array_pop(extract_emails_from($one_message->from)); $one_message->contactFrom = $contact_manager->get_person_from_email($email); $one_message->parts = array(); foreach (new RecursiveIteratorIterator($mail->getMessage($i)) as $ii=>$part) { try { $tpart = $part; $one_message->parts[$ii] = $tpart; // put the part of the message indexed with what I assume is its position if (strtok($part->contentType, ';') == 'text/html') { $b = $part->getContent(); $h2t->set_html($b); $one_message->body = $h2t->get_text(); } if (strtok($part->contentType, ';') == 'text/plain') { $b = $part->getContent(); $one_message->body = $part->getContent(); } } catch (Zend_Mail_Exception $e) { // ignore } } And I display the attachments using the following link: ?action=download&element=email-attachment&box=inbox&uid=<?php echo $one_message->UID;?>&id=<?php echo $one_message->id;?>&part=<?php echo $ii;?> The part variable is the index taken from the loop above. However, when I try to download using the exact same code as above, modified slightly: $id = $_GET['id']; $pid = $_GET['part']; $one_message = $mail->getMessage($id); foreach (new RecursiveIteratorIterator($mail->getMessage($id)) as $ii=>$part) { if($pid == $ii): $content = base64_decode($part->getContent()); $cnt_typ = explode(";" , $part->contentType); $name = explode("=",$cnt_typ[1]); $filename = $name[1];//It is the file name of the attachement in browser //This for avoiding " from the file name when sent from yahoomail $filename = str_replace('"'," ",$filename); $filename = trim($filename); header("Cache-Control: public"); header("Content-Type: ".$cn_typ[1]); header("Content-Disposition: attachment; filename=\"" . $filename . "\""); header("Content-Transfer-Encoding: binary"); echo $content; exit; endif; } For some reason it doesn't work at all. I did a var_dump and noticed that in the loop with the RecursiveIteratorIterator the indexes $ii returned are 1 - 2 - 2 !! What's going on here? How can the indexes be repeated? How can I tell one part from the others?

    Read the article

  • Parallel.For System.OutOfMemoryException

    - by Martin Neal
    We have a fairly simple program that's used for creating backups. I'm attempting to parallelize it but am getting an OutOfMemoryException within an AggregateException. Some of the source folders are quite large, and the program doesn't crash until about 40 minutes after it starts. I don't know where to start looking, so the code below is a near-exact dump of all the code, sans directory structure and exception logging code. Any advice as to where to start looking? using System; using System.Diagnostics; using System.IO; using System.Threading.Tasks; namespace SelfBackup { class Program { static readonly string[] saSrc = { "\\src\\dir1\\", //... "\\src\\dirN\\", //this folder is over 6 GB }; static readonly string[] saDest = { "\\dest\\dir1\\", //... "\\dest\\dirN\\", }; static void Main(string[] args) { Parallel.For(0, saDest.Length, i => { string sDest = saDest[i]; //destination for this source folder try { if (Directory.Exists(sDest)) { //Delete directory first so old stuff gets cleaned up Directory.Delete(sDest, true); } //recursive function clsCopyDirectory.copyDirectory(saSrc[i], sDest); } catch (Exception e) { //standard error logging CL.EmailError(); } }); } } } /////////////////////////////////////// using System.IO; using System.Threading.Tasks; namespace SelfBackup { static class clsCopyDirectory { static public void copyDirectory(string Src, string Dst) { Directory.CreateDirectory(Dst); /* Copy all the files in the folder. If and when .NET 4.0 is installed, change Directory.GetFiles to Directory.EnumerateFiles for slightly better performance.*/ Parallel.ForEach<string>(Directory.GetFiles(Src), file => { /* An exception thrown here may be arbitrarily deep into this recursive function; there's also a good chance that if one copy fails here, so too will other files in the same directory, so we don't want to spam out hundreds of error e-mails but we don't want to abort altogether. Instead, the best solution is probably to throw back up to the original caller of copyDirectory and move on to the next Src/Dst pair by not catching any possible exception here.*/ File.Copy(file, //src Path.Combine(Dst, Path.GetFileName(file)), //dest true);//bool overwrite }); //Call this function again for every directory in the folder. Parallel.ForEach(Directory.GetDirectories(Src), dir => { copyDirectory(dir, Path.Combine(Dst, Path.GetFileName(dir))); }); } } }

    Read the article

  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising): <script> var largeArray = [/* lots of stuff in here */]; </script> In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save the JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.) Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console: Error: script stack space quota is exhausted In Chrome, I simply get the sad-tab page: Cut to the chase, already Is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong... Edit 1 @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now. @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards. @Phrogz: each element looks something like this: {dateTime:new Date(1296176400000), terminalId:'terminal999', 'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680, 'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false', 'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0} @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space ...and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all! *other than the obvious: sending less data to the browser

    Read the article

  • Good Secure Backups Developers at Home

    - by slashmais
    What is a good, secure method of doing backups, for programmers who do research & development at home and cannot afford to lose any work? Conditions: 1. The backups must ALWAYS be within reasonably easy reach. 2. An Internet connection cannot be guaranteed to be always available. 3. The solution must be either FREE or priced within reason, and subject to 2 above. Status Report: This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere): BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk. Storebackup is a backup utility that stores files on other disks. mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations. Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network-based backup program. AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport-independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location. rsnapshot is a filesystem snapshot utility for making backups of local and remote systems. rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc. Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync [for] incremental archives. Other Possibilities: Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar, or Mercurial answers the need to have the backup available locally. Use free online storage space as a remote backup, e.g. compress your work/backup directory and mail it to your gmail account. Strategies: See crazyscot's answer.
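
As a bare-bones illustration of the "compress your work directory and push a copy somewhere else" strategy mentioned above (an illustration only: the paths are placeholders, and it is no substitute for the dedicated tools listed or for keeping the source in a DVCS), here is a small sketch that writes a dated archive plus a checksum locally and copies both to a second drive when it is present:

# Bare-bones sketch of the "compress and copy elsewhere" strategy; all paths
# are placeholders and real setups should prefer the tools listed above.
import hashlib
import shutil
import tarfile
from datetime import date
from pathlib import Path

SRC = Path.home() / "work"                 # what to back up
LOCAL = Path.home() / "backups"            # always-within-reach copy
REMOTE = Path("/mnt/usbdrive/backups")     # second copy: USB disk, NAS share, ...

def make_archive():
    LOCAL.mkdir(parents=True, exist_ok=True)
    archive = LOCAL / f"work-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SRC, arcname=SRC.name)
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    checksum = archive.with_name(archive.name + ".sha256")
    checksum.write_text(f"{digest}  {archive.name}\n")
    return archive, checksum

if __name__ == "__main__":
    archive, checksum = make_archive()
    if REMOTE.exists():                    # skip quietly if the drive isn't mounted
        shutil.copy2(archive, REMOTE)
        shutil.copy2(checksum, REMOTE)
    print("backup written:", archive)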

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >