Search Results

Search found 3996 results on 160 pages for 'flavour of the month'.


  • Hour-long shutdown duration "shutting down hyper-v virtual machine management service"

    - by icelava
    I have a Windows 2008 R2 server that is a Hyper-V host (Dell PowerEdge T300). Today for the first time I encountered an odd situation: I lost connection with one of the guest machines, but logging on physically it seems the guest OS is still running, just no longer contactable via the network. I tried to shut down the guest machine (Windows XP) but it would not shut down, getting stuck on a "Not responding" dialog box that cannot be dismissed. I used the Hyper-V management console to reset the machine and it could not get out of the resetting state. I tried to save another Windows 2003 guest machine, and it would not progress beyond its Saving state (0%). The other running Windows 2003 guest was stuck at the logon dialog. My first suspicion is that one of the Windows update patches this week (10 Nov 2011) may have something to do with it, since it was still pending a system restart. Well, since I could not do anything with Hyper-V I proceeded with the Windows Update restart, and now it has been stuck for half an hour at "Shutting down Hyper-V Virtual Machine Management service". Prior to restarting I did not observe any hard disk errors reported in the system event log, so I doubt it is a disk-related condition. Shall I force a hard reboot?

    UPDATE: I left it hanging for over an hour while attending to other matters, and thankfully the host cleanly restarted. I can operate the guest machines fine now. Phew. Hyper-V must have been crawling for some reason. The VMs have been observed to become slow in the past when the host has been up for a long duration (two weeks to a month), but never this slow. I would love to know what types of performance monitoring items I can observe to give a hint why this can happen.

    UPDATE 2012-02-13: In the months since, Hyper-V has stalled into this state another two times. It appears randomly and without any error event logs to hint at what is causing it to enter this "drunkard" state -- just a Hyper-V management service timeout: Log Name: System Source: Service Control Manager Date: 13/2/2012 9:16:48 AM Event ID: 7043 Task Category: None Level: Error Keywords: Classic User: N/A Computer: elune Description: The Hyper-V Virtual Machine Management service did not shut down properly after receiving a preshutdown control. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" /> <EventID Qualifiers="49152">7043</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2012-02-13T01:16:48.882901900Z" /> <EventRecordID>567844</EventRecordID> <Correlation /> <Execution ProcessID="764" ThreadID="8484" /> <Channel>System</Channel> <Computer>elune</Computer> <Security /> </System> <EventData> <Data Name="param1">Hyper-V Virtual Machine Management</Data> </EventData> </Event> The only means out of it is to restart the system.
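    Editor's note, not part of the original post: as a starting point for the performance monitoring question above, a minimal C# sketch that samples a few stock Windows counters (CPU, free memory, physical disk queue) so a baseline exists to compare against the next time the host starts crawling. The counter names are standard Windows counters; the five-second interval is arbitrary.

      using System;
      using System.Diagnostics;
      using System.Threading;

      class HostHealthProbe
      {
          static void Main()
          {
              // Stock Windows counters; run this while the host is healthy to get a baseline.
              var cpu  = new PerformanceCounter("Processor", "% Processor Time", "_Total");
              var mem  = new PerformanceCounter("Memory", "Available MBytes");
              var disk = new PerformanceCounter("PhysicalDisk", "Avg. Disk Queue Length", "_Total");

              while (true)
              {
                  // The first read of a counter is often 0; later reads are meaningful.
                  Console.WriteLine("{0:T}  CPU {1,6:F1}%  FreeMB {2,8:F0}  DiskQ {3,6:F2}",
                      DateTime.Now, cpu.NextValue(), mem.NextValue(), disk.NextValue());
                  Thread.Sleep(5000);
              }
          }
      }

    A sustained disk queue length or very low available memory at the times the VMs slow down would point at storage or memory pressure rather than Hyper-V itself.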

    Read the article

  • Issues Converting Plain Text Into Microsoft Word Bulleted Lists

    - by user787832
    I'm a programmer. I hate status reports. I found a way to live with it. While I am working in my IDE (Visual SlickEdit) I keep a plain text file open in one of the file/buffer tabs. As I finish things I just jot down a quick note into that file. At the end of the week that becomes my weekly status report. Example entries:

    The Datatables.net plugin runs very slowly in IE 8 with more than 2,000 records. I changed the way I did the server side code to process the data to make less work for the plugin to get decent performance for the IE 8 users.

    I made a class to wrap data from the new data collection objects into the legacy data holder objects. This will let the new database code be backward compatible with the legacy code until we can replace it.

    I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for production site.

    At the end of the month I go back to each weekly *.txt file and paste all of the entries into a MS Word file for a monthly report. I give the monthly report to a liaison to the contracting company who has to compile everyone's monthly reports into a single MS Word 2007 document. His problem, soon to be my problem, comes when he highlights my paragraphs like the ones above to put bullets in front of them: Word rearranges the text a bit and the newline chars/carriage returns stagger the text so the text is no longer in neat chunks. This:

    I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for production site

    Becomes this:

    I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for production site

    I tried turning word wrap on in my IDE for the text files I put my status notes in. It just puts some kind of newline character in anyway. Searching/replacing those chars in the text files has the result of destroying the paragraphs. Once my notes are pasted into MS Word, Word automatically translates them into paragraph breaks. Searching/replacing them there has similar results. Blank lines separating the notes disappear. One big mess. What I would like is to be able to keep adding my status notes to a text file as I am now, but do something different when I paste the notes into MS Word such that my liaison can select the text, hit the bulleting command and NOT have the staggered text as shown above. Any ideas? Thanks much in advance, Steve
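    Editor's note, not from the original thread: one way to attack this is to normalize the notes file before pasting -- collapse the hard-wrapped lines inside each note into a single line and keep only the blank lines between notes, so Word sees exactly one paragraph per note and bulleting stays tidy. A minimal C# sketch of that idea (the file names are placeholders):

      using System.IO;
      using System.Linq;
      using System.Text.RegularExpressions;

      class FlattenNotes
      {
          static void Main()
          {
              // Hypothetical input path: the weekly plain-text notes file.
              string raw = File.ReadAllText("weekly-notes.txt");

              // Split notes on blank lines, then join the wrapped lines of each note with spaces.
              var notes = Regex.Split(raw, @"(?:\r?\n){2,}")
                               .Select(n => Regex.Replace(n.Trim(), @"\s*\r?\n\s*", " "))
                               .Where(n => n.Length > 0);

              // One physical line per note: Word treats each line as one paragraph.
              File.WriteAllLines("weekly-notes-flattened.txt", notes);
          }
      }

    Pasting the flattened file into Word then gives one clean paragraph per note, which the bullet command handles without staggering.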

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up: I've been a programmer for quite some time now but I'm still a bit fuzzy on deep, internal stuff. I am well aware that it's not a good idea to either (1) kill -9 a process (bad) or (2) spontaneously pull the power plug on a running computer or server (worse). However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out.

    My Concern: Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt -- who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through Disk Utility. It will report no problems, but I'm still concerned. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost?

    My Hope: My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is that I've had to repair some MySQL tables, but I don't seem to have lost any data. But if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and protect against it.

    My Question(s): Could it be that modern operating systems are designed to ensure that nothing is lost in these scenarios? Could it be that modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it? I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any very recent data that was supposed to be written to the disk (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage?

    What Would Help Me Most: Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system (it seems instant, but can someone slow it down for me?). Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely). Descriptions of measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur (to comfort me). Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
    Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated. Thanks so much!
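    Editor's note, not part of the original question, but to make the "unflushed data" point concrete: an application that cares about the last few seconds of data explicitly asks the OS to push its buffers through to the physical disk before treating a write as durable. A minimal C# sketch of that pattern (the file name is just an example):

      using System.IO;
      using System.Text;

      class DurableWrite
      {
          static void Main()
          {
              byte[] payload = Encoding.UTF8.GetBytes("critical record\n");

              using (var fs = new FileStream("journal.log", FileMode.Append,
                                             FileAccess.Write, FileShare.Read))
              {
                  fs.Write(payload, 0, payload.Length);

                  // Flush(true) asks the OS to write its caches through to the device
                  // (FlushFileBuffers on Windows, roughly fsync on UNIX-like systems).
                  // Without a step like this, data written "seconds ago" may still live
                  // only in RAM when the power is pulled.
                  fs.Flush(true);
              }
          }
      }

    Databases and journaling filesystems do the equivalent internally, which is why a power pull usually costs only the most recent unflushed writes rather than random files elsewhere on the disk.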

    Read the article

  • Hoster not fulfilling contract: how to get money back?

    - by plua
    For several years, we, a small web design company, have rented a dedicated server at a large hosting provider. They had several support levels. When we signed up for this, we had very limited in-house knowledge about server maintenance and were very worried about the security of our server. We therefore took one of the more expensive support packages. An important aspect of this were these claims: [PROVIDER] verifies the availability of the latest security updates and sends you a notification to see if you are interested to have them installed. [PROVIDER] verifies the availability of the latest supported software updates and sends you a notification to see if you are interested to have them installed. These items were clearly stated on their website as being part of the advantage of this package. With not enough knowledge about installing and updating such software on a Linux server, we decided to go for this package. We paid a premium of $50 per month over the maintenance package that is next in line ($100 vs $50). Over the years, we have paid several thousand dollars for this service.

    Then came the moment that I learned more and more about server management. And I found out, step by step, that our server was horrendously outdated! We had an OS that was hardly updated, our anti-virus was not working because it needed certain more recent packages on the OS, and in general there were a whole bunch of security vulnerabilities and fixes that were lacking. Shocked, I wrote the provider. Turns out, they decided unilaterally that they would not send out any notifications to clients because clients would get too many e-mails. This is a quote from their explanation: [...] We have decided not to spam its clients with OS and security updates and only install them whenever asked by the client. I was shocked! They had never mentioned that they would drop this service, and in fact the claims about updating their clients through e-mail were still on their website, after they apparently stopped doing this years ago!

    Upon finding this out, I requested they refund all that we have paid as a premium over the other package, and make it available as future credit with their own company. I thought this was a very reasonable request. However, they said they would only go back one year and provide credit for that one year. Mails went back and forth, but they were not willing to give credit for the whole period, which I felt I was entitled to. So ultimately I left the hosting company, and filed a complaint with the BBB a while ago. Now, I am not the kind of person who runs to a lawyer for any minor thing, but in this case I am really considering taking action. I have been paying for years for a service I did not receive (the premium package had a few other pluses, but we took it primarily for these two points, and I can prove that we did not use the other benefits). For our small company the hosting costs were a very large part of our budget, and I feel it is very unfair how this large provider just does not care about not fulfilling its obligations.

    So my question is: what action should I take? Is a lawyer the only next step, or are there other suggestions? And am I right to claim this money, or are they right that there is some sort of statute of limitations on such claims? Any feedback is appreciated.

    Read the article

  • Trac vs. Redmine vs. JIRA vs. FogBugz for one-man shop?

    - by kizzx2
    Background: I am a one-man freelancer looking for project management software that can meet the following requirements. I have used Trac for about a year now, tried Redmine and FogBugz On Demand for a couple of weeks, and have never tried JIRA before. Basically, I'm looking for a piece of software that facilitates developer-client communication/collaboration and does time tracking.

    Requirements: Record time estimates / time tracking. Clients must be able to create/edit their own tickets/cases. Clients must not see developer-created tickets/cases (internal). Affordable (price) with multiple clients.

    Nice-to-haves: Supports multiple projects in one installation. Free Eclipse integration (Mylyn). Easy time-tracking without using the Web UI (Trac's post-commit hook or Redmine's commit message scanning). Clients can access the wiki. Export the data to standard formats.

    My evaluation: Trac can fulfill most of the above requirements, but only with so many customizations and plug-ins that it doesn't feel clean. One downside is that the main trunk (0.11) has been around for a year or more and I still haven't seen much sign of any upgrade coming. Redmine has the cleanest Web UI. Its design philosophy seems to be the most elegant, with its innovative commit message scanning and so on. However, the current version doesn't seem to be very mature and stable yet: it doesn't support internal (private) tickets, and the time-tracking commit message patch doesn't support the trunk version. The good thing about it is that the main trunk still seems to be actively developed. FogBugz is actually a very well written piece of software. However, the idea of paying $25/month for a client to be able to log in to the system seems a little too far off for an individual developer. The free version lets clients create/view their own cases using email, which is a sub-optimal alternative to having a full-fledged list of the user's own cases. That also means clients can't read/write wiki pages. Its time-tracking approach is innovative and good, though. However, all of the Eclipse integrations (Bugclipse, Foglyn) are commercial -- yet more investment before I can use my bug tracker! And if I revert to the Web UI, it's not really a fast-rendering Web service. Also, the built-in report functions are excellent (e.g. evidence-based scheduling). JIRA is something I have zero experience with. Can someone with JIRA experience recommend why it might be a good fit for this particular situation?

    Question: Can we share experience on this? Are there any specific plugins/customizations that would best suit the requirements of this case?

    Read the article

  • FullCalendar events from asp.net ASHX page not displaying

    - by Steve Howard
    Hi, I have been trying to add some events to fullCalendar using a call to an ASHX page with the following code. Page script: <script type="text/javascript"> $(document).ready(function() { $('#calendar').fullCalendar({ header: { left: 'prev,next today', center: 'title', right: 'month, agendaWeek,agendaDay' }, events: 'FullCalendarEvents.ashx' }) }); C# code: public class EventsData { public int id { get; set; } public string title { get; set; } public string start { get; set; } public string end { get; set; } public string url { get; set; } public int accountId { get; set; } } public class FullCalendarEvents : IHttpHandler { private static List<EventsData> testEventsData = new List<EventsData> { new EventsData {accountId = 0, title = "test 1", start = DateTime.Now.ToString("yyyy-MM-dd"), id=0}, new EventsData{ accountId = 1, title="test 2", start = DateTime.Now.AddHours(2).ToString("yyyy-MM-dd"), id=2} }; public void ProcessRequest(HttpContext context) { context.Response.ContentType = "application/json."; context.Response.Write(GetEventData()); } private string GetEventData() { List<EventsData> ed = testEventsData; StringBuilder sb = new StringBuilder(); sb.Append("["); foreach (var data in ed) { sb.Append("{"); sb.Append(string.Format("id: {0},", data.id)); sb.Append(string.Format("title:'{0}',", data.title)); sb.Append(string.Format("start: '{0}',", data.start)); sb.Append("allDay: false"); sb.Append("},"); } sb.Remove(sb.Length - 1, 1); sb.Append("]"); return sb.ToString(); } } The ASHX page gets called and returns the following data: [{id: 0,title:'test 1',start: '2010-06-07',allDay: false},{id: 2,title:'test 2',start: '2010-06-07',allDay: false}] The call to the ASHX page does not display any results, but if I paste the returned values directly into the events option it displays correctly. I have been trying to get this code to work for a day now and I can't see why the events are not getting set. Any help or advice on how I can get this to work would be appreciated. Steve
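    Editor's note, not confirmed in the thread but a likely culprit: the hand-built string is not valid JSON (the property names are unquoted) and the content type has a stray trailing period, so the AJAX feed cannot be parsed even though pasting the same text as a JavaScript literal works. A minimal C# sketch of the handler using a real JSON serializer instead; the event shape mirrors the question, and JavaScriptSerializer comes from the System.Web.Extensions assembly:

      using System.Collections.Generic;
      using System.Web;
      using System.Web.Script.Serialization;

      public class FullCalendarEvents : IHttpHandler
      {
          public bool IsReusable { get { return false; } }

          public void ProcessRequest(HttpContext context)
          {
              // Shape matches what fullCalendar expects: id, title, start, allDay.
              var events = new List<object>
              {
                  new { id = 0, title = "test 1", start = "2010-06-07", allDay = false },
                  new { id = 2, title = "test 2", start = "2010-06-07", allDay = false }
              };

              // Let a serializer produce valid JSON (quoted property names, proper escaping).
              string json = new JavaScriptSerializer().Serialize(events);

              context.Response.ContentType = "application/json";   // no trailing period
              context.Response.Write(json);
          }
      }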

    Read the article

  • Wpf Combobox in Master/Detail MVVM

    - by isak
    I have MVVM master /details like this: <Window.Resources> <DataTemplate DataType="{x:Type model:EveryDay}"> <views:EveryDayView/> </DataTemplate> <DataTemplate DataType="{x:Type model:EveryMonth}"> <views:EveryMonthView/> </DataTemplate> </Window.Resources> <Grid> <ListBox Margin="12,24,0,35" Name="schedules" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Path=Elements}" SelectedItem="{Binding Path=CurrentElement}" DisplayMemberPath="Name" HorizontalAlignment="Left" Width="120"/> <ContentControl Margin="168,86,32,35" Name="contentControl1" Content="{Binding Path=CurrentElement.Schedule}" /> <ComboBox Height="23" Margin="188,24,51,0" Name="comboBox1" VerticalAlignment="Top" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Path=Schedules}" SelectedItem="{Binding Path=CurrentElement.Schedule}" DisplayMemberPath="Name" SelectedValuePath="ID" SelectedValue="{Binding Path=CurrentElement.Schedule.ID}" /> </Grid> This Window has DataContext class: public class MainViewModel : INotifyPropertyChanged { public MainViewModel() { _elements.Add(new Element("first", new EveryDay("First EveryDay object"))); _elements.Add(new Element("second", new EveryMonth("Every Month object"))); _elements.Add(new Element("third", new EveryDay("Second EveryDay object"))); _schedules.Add(new EveryDay()); _schedules.Add(new EveryMonth()); } private ObservableCollection<ScheduleBase> _schedules = new ObservableCollection<ScheduleBase>(); public ObservableCollection<ScheduleBase> Schedules { get { return _schedules; } set { _schedules = value; this.OnPropertyChanged("Schedules"); } } private Element _currentElement = null; public Element CurrentElement { get { return this._currentElement; } set { this._currentElement = value; this.OnPropertyChanged("CurrentElement"); } } private ObservableCollection<Element> _elements = new ObservableCollection<Element>(); public ObservableCollection<Element> Elements { get { return _elements; } set { _elements = value; this.OnPropertyChanged("Elements"); } } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; protected void OnPropertyChanged(string propertyName) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } #endregion } One of Views: <UserControl x:Class="Views.EveryDayView" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Grid > <GroupBox Header="Every Day Data" Name="groupBox1" VerticalAlignment="Top"> <Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch"> <TextBox Name="textBox2" Text="{Binding Path=AnyDayData}" /> </Grid> </GroupBox> </Grid> I have problem with SelectedItem in ComboBox.It doesn't works correctly.
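    Editor's note, not stated in the post but a common cause of this symptom: a ComboBox only highlights SelectedItem when the bound object is the same instance as (or Equals-equal to) an item in ItemsSource. Here CurrentElement.Schedule is a different instance from anything in the Schedules collection, so nothing gets selected; mixing SelectedItem, SelectedValue and SelectedValuePath on the same ComboBox compounds the confusion. One hedged way out is to bind only SelectedValue/SelectedValuePath, another is to give ScheduleBase value equality, roughly like this (the ID and Name properties are assumptions based on the SelectedValuePath="ID" and DisplayMemberPath="Name" bindings in the XAML):

      // Sketch only: assumes ScheduleBase exposes ID and Name, as the XAML bindings suggest.
      public abstract class ScheduleBase
      {
          public int ID { get; set; }
          public string Name { get; set; }

          public override bool Equals(object obj)
          {
              // Treat two schedules of the same concrete type with the same ID as equal,
              // so the ComboBox can match CurrentElement.Schedule against an item in
              // the Schedules collection even when they are different instances.
              var other = obj as ScheduleBase;
              return other != null && other.GetType() == GetType() && other.ID == ID;
          }

          public override int GetHashCode()
          {
              return GetType().GetHashCode() ^ ID.GetHashCode();
          }
      }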

    Read the article

  • Mac - Flash file not loaded in independent flash player

    - by Mugdha
    Hi, I am working on an independent application to play flash files on Mac. I have already done the same for Linux, and it works flawlessly but on mac for some reason flash is not drawing to my window. It is not throwing any kind of error too. I am using Flash player 10, that would mean that I am using the Core Graphics drawing model. I am able to send mouse events to flash and wrote a sample plugin to check if there was a problem in the context that I was sending, but my sample plugin draws properly to the window. I am getting a call for NPN_InvalidateRect twice and as a response I send an update Event back to flash. I drew a dummy rectangle to check that my context is correct. I have flipped the context to make the origin as top left corner. On doing right click on the debug version of the flash player it shows the following message: "Movie not loaded..." Can anyone give me any idea why is the content not being drawn? I would really appreciate the help, as I have been struggling with it for more than a month now. Here is a small log of the interaction that I have with flash: NPN_UserAgent Called NPN_GetValue Called with variable NPNVWindowNPObject; return NULL NPN_GetValue Called with variable NPNVWindowNPObject; return NULL NPN_GetValue Called with variable NPNVSupportsWindowless; return true NPN_SetValue Called for Variable - NPPVpluginTransparentBool; return true NPN_GetValue Called with variable NPNVsupportsCoreGraphicsBool; return true NPN_SetValue Called for Variable - NPNVpluginDrawingModel NPP_SetWindow (CoreGraphics): 0, window=0xebaa90, context=0xe4c930, window.x:0 window.y:22 window.width:480 window.height:270 NPP_HandleEvent(activateEvent) accepted:0 isActive: 1 NPP_HandleEvent(updateEvt) accepted: 1 NPN_UserAgent Called NPN_GetURLNotify Called with URL - javascript:top.location+"flashplugin_unique" NPN_GetValue Called with variable NPNVWindowNPObject; return NULL NPP_NewStream URL=/Users/mjain/Desktop/clock.swf MIME=application/x-shockwave-flash error=0 NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455 NPN_InvalidateRect Called NPP_Write responseURL=/Users/mjain/Desktop/clock.swf bytes=9925 total-delivered=9925/9925 NPP_WriteReady responseURL=/Users/mjain/Desktop/clock.swf bytes=268435455 NPP_DestroyStream responseURL=/Users/mjain/Desktop/clock.swf error=0 NPP_HandleEvent(updateEvt) accepted: 1 NPN_InvalidateRect Called NPP_HandleEvent(updateEvt) accepted: 1 NPP_NewStream URL=javascript:top.location+"flashplugin_unique" MIME=text/plain error=0 NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000 NPN_UserAgent Called NPP_Write responseURL=javascript:top.location+"flashplugin_unique" bytes=52 total-delivered=52/52 NPP_WriteReady responseURL=javascript:top.location+"flashplugin_unique" bytes=16000 NPP_DestroyStream responseURL=javascript:top.location+"flashplugin_unique" error=0 NPP_URLNotify responseURL=javascript:top.location+"flashplugin_unique" reason=0 Thanks Mugdha.

    Read the article

  • Computer science undergraduate project ideas

    - by Mehrdad Afshari
    Hopefully, I'm going to finish my undergraduate studies next semester and I'm thinking about the topic of my final project. And yes, I've read the questions with duplicate title. I'm asking this from a bit different viewpoint, so it's not an exact dupe. I've spent at least half of my life coding stuff in different languages and frameworks so I'm not looking at this project as a way to learn much about coding and preparing for real world apps or such. I've done lots of those already. But since I have to do it to complete my degree, I felt I should spend my time doing something useful instead of throwing the whole thing out. I'm planning to make it an open source project or a hosted Web app (depending on the type) if I can make a high quality thing out of it, so I decided to ask StackOverflow what could make a useful project. Situation I've plenty of freedom about the topic. They also require 30-40 pages of text describing the project. I have the following points in mind (the more satisfied, the better): Something useful for software development Something that benefits the community Having academic value is great Shouldn't take more than a month of development (I know I'm lazy). Shouldn't be related to advanced theoretical stuff (soft computing, fuzzy logic, neural networks, ...). I've been a business-oriented software developer. It should be software oriented. While I love hacking microcontrollers and other fun embedded electronic things, I'm not really good at soldering and things like that. I'm leaning toward a Web application (think StackOverflow, PasteBin, NerdDinner, things like those). Technology It's probably going to be done in .NET (C#, F#) and Windows platform. If I really like the project (cool low level hacking), I might actually slip to C/C++. But really, C# is what I'm efficient at. Ideas Programming language, parsing and compiler related stuff: Designing a domain specific programming language and compiler Templating language compiled to C# or IL Database tools and related code generation stuff Web related technologies: ASP.NET MVC View engine doing something cool (don't know what exactly...) Specific-purpose, small, fast ASP.NET-based Web framework Applications: Visual Studio plugin to integrate with Bazaar (it's too much work, I think). ASP.NET based, jQuery-powered issue tracker (and possibly, project lifecycle management as a whole - poor man's TFS) Others: Something related to GPGPU Looking forward for great ideas! Unfortunately, I can't help on a currently existing project. I need to start my own to prevent further problems (as it's an undergrad project, nevertheless).

    Read the article

  • EPPlus - .xlsx is locked for editing by 'another user'

    - by AdamTheITMan
    I have searched through every possible answer on SO for a solution, but nothing has worked. I am basically creating an excel file from a database and sending the results to the response stream using EPPlus(OpenXML). The following code gives me an error when trying to open my generated excel sheet "[report].xlsx is locked for editing by 'another user'." It will open fine the first time, but the second time it's locked. Dim columnData As New List(Of Integer) Dim rowHeaders As New List(Of String) Dim letter As String = "B" Dim x As Integer = 0 Dim trendBy = context.Session("TRENDBY").ToString() Dim dateHeaders As New List(Of String) dateHeaders = DirectCast(context.Session("DATEHEADERS"), List(Of String)) Dim DS As New DataSet DS = DirectCast(context.Session("DS"), DataSet) Using excelPackage As New OfficeOpenXml.ExcelPackage Dim excelWorksheet = excelPackage.Workbook.Worksheets.Add("Report") 'Add title to the top With excelWorksheet.Cells("B1") .Value = "Account Totals by " + If(trendBy = "Months", "Month", "Week") .Style.Font.Bold = True End With 'add date headers x = 2 'start with letter B (aka 2) For Each Header As String In dateHeaders With excelWorksheet.Cells(letter + "2") .Value = Header .Style.HorizontalAlignment = OfficeOpenXml.Style.ExcelHorizontalAlignment.Right .AutoFitColumns() End With x = x + 1 letter = Helper.GetColumnIndexToColumnLetter(x) Next 'Adds the descriptive row headings down the left side of excel sheet x = 0 For Each DC As DataColumn In DS.Tables(0).Columns If (x < DS.Tables(0).Columns.Count) Then rowHeaders.Add(DC.ColumnName) End If Next Dim range = excelWorksheet.Cells("A3:A30") range.LoadFromCollection(rowHeaders) 'Add the meat and potatoes of report x = 2 For Each dTable As DataTable In DS.Tables columnData.Clear() For Each DR As DataRow In dTable.Rows For Each item As Object In DR.ItemArray columnData.Add(item) Next Next letter = Helper.GetColumnIndexToColumnLetter(x) excelWorksheet.Cells(letter + "3").LoadFromCollection(columnData) With excelWorksheet.Cells(letter + "3") .Formula = "=SUM(" + letter + "4:" + letter + "6)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "7") .Formula = "=SUM(" + letter + "8:" + letter + "11)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "12") .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "13") .Formula = "=SUM(" + letter + "14:" + letter + "20)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "21") .Formula = "=SUM(" + letter + "22:" + letter + "23)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "24") .Formula = "=SUM(" + letter + "25:" + letter + "26)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "27") .Formula = "=SUM(" + letter + "28:" + letter + "29)" .Style.Font.Bold = True .Style.Font.Size = 12 End With With excelWorksheet.Cells(letter + "30") .Formula = "=SUM(" + letter + "3," + letter + "7," + letter + "12," + letter + "13," + letter + "21," + letter + "24," + letter + "27)" .Style.Font.Bold = True .Style.Font.Size = 12 End With x = x + 1 Next range.AutoFitColumns() 'send it to response Using stream As New MemoryStream(excelPackage.GetAsByteArray()) context.Response.Clear() context.Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" context.Response.AddHeader("content-disposition", "attachment; filename=filetest.xlsx") 
context.Response.OutputStream.Write(stream.ToArray(), 0, stream.ToArray().Length) context.Response.Flush() context.Response.Close() End Using End Using
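    Editor's note, not confirmed as the root cause here (Excel also reports "locked for editing" when an earlier copy of the file is still open), but Response.Close() aborts the connection and is a known source of truncated or otherwise odd downloads. The usual pattern is to write the bytes, flush, and let ASP.NET finish the request; a minimal sketch of that pattern in C# (the question's code is VB.NET, and GetAsByteArray() already returns the finished workbook, so no MemoryStream is needed):

      using System.Web;
      using OfficeOpenXml;

      static class ExcelDownload
      {
          // Sends a finished EPPlus package to the browser without Response.Close().
          public static void Send(HttpContext context, ExcelPackage excelPackage, string fileName)
          {
              byte[] bytes = excelPackage.GetAsByteArray();

              context.Response.Clear();
              context.Response.ContentType =
                  "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
              context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
              context.Response.BinaryWrite(bytes);
              context.Response.Flush();

              // Let the ASP.NET pipeline end the request instead of aborting the connection.
              context.ApplicationInstance.CompleteRequest();
          }
      }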

    Read the article

  • Get non-overlapping dates ranges for prices history data

    - by Anonymouse
    Hello, Let's assume that I have the following table: CREATE TABLE [dbo].[PricesHist]( [Product] varchar NOT NULL, [Price] [float] NOT NULL, [StartDate] [datetime] NOT NULL, [EndDate] [datetime] NOT NULL ) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D2C00000000 AS DateTime), CAST(0x00009D2C00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D2D00000000 AS DateTime), CAST(0x00009D2D00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D2E00000000 AS DateTime), CAST(0x00009D2E00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3000000000 AS DateTime), CAST(0x00009D3000000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3100000000 AS DateTime), CAST(0x00009D3100000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3400000000 AS DateTime), CAST(0x00009D3400000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D3500000000 AS DateTime), CAST(0x00009D3500000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3600000000 AS DateTime), CAST(0x00009D3600000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3700000000 AS DateTime), CAST(0x00009D3700000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3800000000 AS DateTime), CAST(0x00009D3800000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3A00000000 AS DateTime), CAST(0x00009D3A00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3B00000000 AS DateTime), CAST(0x00009D3B00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D3C00000000 AS DateTime), CAST(0x00009D3C00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3D00000000 AS DateTime), CAST(0x00009D3D00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3E00000000 AS DateTime), CAST(0x00009D3E00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3F00000000 AS DateTime), CAST(0x00009D3F00000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4100000000 AS DateTime), CAST(0x00009D4100000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4200000000 AS DateTime), CAST(0x00009D4200000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D4300000000 AS DateTime), CAST(0x00009D4300000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, 
CAST(0x00009D4400000000 AS DateTime), CAST(0x00009D4400000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4500000000 AS DateTime), CAST(0x00009D4500000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4600000000 AS DateTime), CAST(0x00009D4600000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4800000000 AS DateTime), CAST(0x00009D4800000000 AS DateTime)) INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D4A00000000 AS DateTime), CAST(0x00009D4A00000000 AS DateTime)) As you can see, there are two prices on that month for Apples. 4.90 and 2.50. In order to tidy this table up, I need to get this information as a date range rather than a row per day as it currently is. I can obviously do this with Min and Max aggregates easily but the ranges overlap and other business code expect non-overlapping ranges. I also tried to achieve this with self joins and row_number(), but without much success... Here is what I'm trying to achieve as the output: Product | StartDate | EndDate | Price ------------------------------------------- Apples | 01 Mar 2010 | 02 Mar 2010 | 4.90 Apples | 03 Mar 2010 | 03 Mar 2010 | 2.50 Apples | 05 Mar 2010 | 09 Mar 2010 | 4.90 Apples | 10 Mar 2010 | 10 Mar 2010 | 2.50 Apples | 11 Mar 2010 | 16 Mar 2010 | 4.90 Apples | 17 Mar 2010 | 17 Mar 2010 | 2.50 Apples | 18 Mar 2010 | 23 Mar 2010 | 4.90 Apples | 24 Mar 2010 | 24 Mar 2010 | 2.50 Apples | 25 Mar 2010 | 30 Mar 2010 | 4.90 Apples | 31 Mar 2010 | 31 Mar 2010 | 2.50 What would please be the best approach to get this done? Thanks a lot in advance,

    Read the article

  • Single Sign On for a Web App

    - by Jeremy Goodell
    I have been trying to understand how this problem is solved for over a month now. I really need to come up with a general approach that works -- I'm basically the only resource who can do it. I have a theory, but I'm just not sure it's the easiest (or correct) approach and I haven't been able to find any information to support my ideas. Here's the scenario: 1) You have a complex web application that offers secure content on a subscription basis. 2) Users are required to log in to your application with user name and password. 3) You sell to large corporations, which already have a corporate authentication technology (for example, Active Directory). 4) You would like to integrate with the corporate authentication mechanism to allow their users to log onto your Web App without having to enter their user name and password. Now, any solution you come up with will have to provide a mechanism for: adding new users removing users changing user information allowing users to log in Ideally, all these would happen "automagically" when the corporate customer made the corresponding changes to their own authentication. Now, I have a theory that the way to do this (at least for Active Directory) would be for me to write a client-side app that integrates with the customer's Active Directory to track the targeted changes, and then communicate those changes to my Web App. I think that if this communication were done via Web Services offered by my web app, then it would maintain an unhackable level of security, which would obviously be a requirement for these corporate customers. I've found some information about a Microsoft product called Active Directory Federation Service (ADFS) which may or may not be the right approach for me. It seems to be a bit bulky and have some requirements that might not work for all customers. For other existing ID scenarios (like Athens and Shibboleth), I don't think a client application is necessary. It's probably just a matter of tying into the existing ID services. I would appreciate any advice anyone has on anything I've mentioned here. In particular, if you can tell me if my theory is correct about providing a client-side app that communicates with server-side Web Services, or if I'm totally going in the wrong direction. Also, if you could point me at any web sites or articles that explain how to do this, I'd really appreciate it. My research has not turned up much so far. Finally, if you could let me know of any Web applications that currently offer this service (particularly as tied to a corporate Active Directory), I would be very grateful. I am wondering if other B2B Web app's like salesforce.com, or hoovers.com offer a similar service for their corporate customers. I hate being in the dark and would greatly appreciate any light you can shed ... Jeremy
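    Editor's note, not from the original post, but to make the "client-side app that watches the customer's Active Directory" theory concrete: the simplest form is a small agent that periodically enumerates users with System.DirectoryServices, diffs the result against the previous run, and pushes adds/removes/changes to the web app's provisioning web service. A minimal C# sketch of the enumeration half (the LDAP path is a placeholder, and the sync call is only described in a comment):

      using System;
      using System.DirectoryServices;

      class AdUserEnumerator
      {
          static void Main()
          {
              // Placeholder LDAP path; a real agent would read this from configuration.
              using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
              using (var searcher = new DirectorySearcher(root))
              {
                  searcher.Filter = "(&(objectCategory=person)(objectClass=user))";
                  searcher.PropertiesToLoad.Add("sAMAccountName");
                  searcher.PageSize = 500;  // page results for large directories

                  foreach (SearchResult result in searcher.FindAll())
                  {
                      string account = (string)result.Properties["sAMAccountName"][0];
                      // A real agent would diff this list against the last run and call the
                      // web app's provisioning web service for adds, removes and changes.
                      Console.WriteLine(account);
                  }
              }
          }
      }

    ADFS (or SAML-based federation in general) removes the need for such an agent when the customer can deploy it, so the polling approach is best seen as the fallback for customers who cannot.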

    Read the article

  • mysql never releases memory

    - by Ishu
    I have a production server clocking about 4 million page views per month. The server has got 8GB of RAM and mysql acts as a database. I am facing problems in handling mysql to take this load. I need to restart mysql twice a day to handle this thing. The problem with mysql is that it starts with some particular occupation, the memory consumed by mysql keeps on increasing untill it reaches the maximum it can consume and then mysql stops responding slowly or does not respond at all, which freezes the server. All my tables are indexed properly and there are no long queries. I need some one to help on how to go about debugging what to do here. All my tables are myisam. I have tried configuring the parameters key_buffer etc but to no rescue. Any sort of help is greatly appreciated. Here are some parameters which may help. mysql --version mysql Ver 14.12 Distrib 5.0.77, for redhat-linux-gnu (i686) using readline 5.1 mysql> show variables; +---------------------------------+------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/ | | bdb_cache_size | 8384512 | | bdb_home | /var/lib/mysql/ | | bdb_log_buffer_size | 262144 | | bdb_logdir | | | bdb_max_lock | 10000 | | bdb_shared_data | OFF | | bdb_tmpdir | /tmp/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_filesystem | binary | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 10 | | datadir | /var/lib/mysql/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | keep_files_on_create | OFF | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | NO | | have_bdb | YES | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES | | have_csv | NO | | have_dynamic_loading | YES | | have_example_engine | NO | | have_federated_engine | NO | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_merge_engine | YES | | have_ndbcluster | NO | | have_openssl | DISABLED | | have_ssl | DISABLED | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | | init_connect | | | init_file | | | init_slave | | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 2621440000 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | lc_time_names | en_US | | license | GPL | | 
local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | ON | | log_bin_trust_function_creators | OFF | | log_error | | | log_queries_not_using_indexes | OFF | | log_slave_updates | OFF | | log_slow_queries | ON | | log_warnings | 1 | | long_query_time | 8 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 0 | | max_allowed_packet | 8388608 | | max_binlog_cache_size | 4294963200 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 400 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 16777216 | | max_insert_delayed_threads | 20 | | max_join_size | 4294967295 | | max_length_for_sort_data | 1024 | | max_prepared_stmt_count | 16382 | | max_relay_log_size | 0 | | max_seeks_for_key | 4294967295 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 4294967295 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 2146435072 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 16777216 | | myisam_stats_method | nulls_unequal | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 2000 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /var/run/mysqld/mysqld.pid | | plugin_dir | | | port | 3306 | | preload_buffer_size | 32768 | | profiling | OFF | | profiling_history_size | 15 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | query_cache_min_res_unit | 4096 | | query_cache_size | 134217728 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 4096 | | read_buffer_size | 2097152 | | read_only | OFF | | read_rnd_buffer_size | 8388608 | | relay_log | | | relay_log_index | | | relay_log_info_file | relay-log.info | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | secure_file_priv | | | server_id | 1 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /var/lib/mysql/mysql.sock | | sort_buffer_size | 2097152 | | sql_big_selects | ON | | sql_mode | | | sql_notes | ON | | sql_warnings | OFF | | ssl_ca | | | ssl_capath | | | ssl_cert | | | ssl_cipher | | | ssl_key | | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | system_time_zone | CST | | table_cache | 256 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 8 | | thread_stack | 196608 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | /tmp/ | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.77-log | | version_bdb | Sleepycat Software: Berkeley DB 4.1.24: (January 29, 2009) | | version_comment | Source distribution | | version_compile_machine | i686 | | version_compile_os | redhat-linux-gnu | | wait_timeout | 28800 | +---------------------------------+------------------------------------------------------------+

    Read the article

  • tinymce not working with chrome when i dynamically setcontent

    - by oo
    I have a site where I put: <body onload="ajaxLoad()" > I have a JavaScript function that then shoves data from my DB into the text editor by using the setContent method of the textarea. It seems fine in Firefox and IE, but in Chrome sometimes nothing shows up -- no error, just a blank editor. In the body section: <textarea id="elm1" name="elm1" rows="40" cols="60" style="width: 100%"> </textarea> In the head section: function ajaxLoad() { var ed = tinyMCE.get('elm1'); ed.setProgressState(1); // Show progress window.setTimeout(function() { ed.setProgressState(0); // Hide progress ed.setContent('<p style="text-align: center;"><strong><br /><span style="font-size: small;">General Manager&#39;s Corner</span></strong></p><p style="text-align: center;">August&nbsp;2009</p><p>It&rsquo;s been 15<sup>th</sup> and so have a Steak Night (Saturday, 15<sup>th</sup>) and a shore Dinner planned (Saturday, 22<sup>nd</sup>) this month. urday, September 5<sup>th</sup>. e a can&rsquo;t missed evening, shas extended it one additional week. The last clinic will be the week of August 11<sup>th</sup>. </p><p>&nbsp;Alt (Tuesday through Thursday) </p><p>&nbsp;I wouClub.</p><p>&nbsp;</p><p>&nbsp;</p><p>&nbsp;</p><p>&nbsp;<strong></strong></p>'); }, 1); } I am not sure if it is some of the formatting that Chrome is rejecting, but it seems like if TinyMCE can parse it in one browser it can do it in any browser, so I am confused. Any suggestions?

    Read the article

  • Something wrong with my XML?

    - by Prateek Raj
    Hi everyone, I'm parsing an XML file with ExtJS but it returns only one of the five components -- only the first of the five. Ext.regModel('Card', { fields: ['investor'] }); var store = new Ext.data.Store({ model: 'Card', proxy: { type: 'ajax', url: 'xmlformat.xml', reader: { type: 'xml', record: 'investors' } }, listeners: { single: true, datachanged: function(){ Ext.getBody().unmask(); var items = []; store.each(function(rec){ alert(rec.get('investor')); }); and my XML file is: <?xml version="1.0" encoding="UTF-8"?> <root> <investors> <investor>Active</investor> <investor>Aggressive</investor> <investor>Conservative</investor> <investor>Day Trader</investor> <investor>Very Active</investor> </investors> <events> <event>3 Month Expiry</event> <event>LEAPS</event> <event>Monthlies</event> <event>Monthly Expiries</event> <event>Weeklies</event> </events> <prices> <price>$0.5</price> <price>$0.05</price> <price>$1</price> <price>$22</price> <price>$100.34</price> </prices> </root> When I run the code, only "Active" comes out. I know that I'm doing something wrong but I'm not sure what. Please help.

    Read the article

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries. I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache. The main problem is that I can't query the datastore by location because GAE won't allow multiple numerical comparison statements (<,<=,=,) in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no go. Currently, my algorithm looks like this: 1.) Query by date and select 2.) Use destination function from geopy's distance module to find the max and min latitude and longitudes for supplied distance 3.) Loop through results and remove all with lat/lng outside max/min 4.) Loop through again and use distance function to check exact distance, because step 2 will include some areas outside the radius. Remove results outside supplied distance (is this 2/3/4 combination inefficent?) 5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations) 6.) Return to client Here's my plan for using memcache.. let me know if I'm way out in left field on this as I have no prior experience with memcache or server caching in general. -Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date. -Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years). -Have a scheduled task that updates the pointers daily at 12am. -Add new inserts to the cache as well as the datastore; update pointers. Using this design, the algorithm would now look like: 1.) Use pointers to slice off appropriate chunk of list based on supplied date. 2-4.) Same as above algorithm, except with geo objects 5.) Use bulk operation to select full tournaments using remaining geo objects' event_ids 6.) Assemble many-to-manys 7.) Return to client Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
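    Editor's note, not from the original post, and the poster's stack is Python/web2py, but the bounding-box-then-exact-distance idea in steps 2/3/4 is language-neutral and the 2/3/4 combination is the standard approach. A small C# sketch of the two-stage filter; the GeoObject type, the kilometre units, and the per-degree constants are assumptions for illustration:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      class GeoObject
      {
          public double Lat, Lng;   // degrees
          public long EventId;
      }

      static class ProximityFilter
      {
          const double EarthRadiusKm = 6371.0;

          // Stage 1: cheap bounding-box cut. Stage 2: exact great-circle distance,
          // which trims the corners of the box that lie outside the radius.
          public static IEnumerable<GeoObject> WithinRadius(
              IEnumerable<GeoObject> candidates, double lat, double lng, double radiusKm)
          {
              double latDelta = radiusKm / 111.0;                                    // ~111 km per degree of latitude
              double lngDelta = radiusKm / (111.0 * Math.Cos(lat * Math.PI / 180));  // degrees of longitude shrink with latitude

              return candidates
                  .Where(g => Math.Abs(g.Lat - lat) <= latDelta &&
                              Math.Abs(g.Lng - lng) <= lngDelta)
                  .Where(g => Haversine(lat, lng, g.Lat, g.Lng) <= radiusKm);
          }

          static double Haversine(double lat1, double lng1, double lat2, double lng2)
          {
              double dLat = (lat2 - lat1) * Math.PI / 180;
              double dLng = (lng2 - lng1) * Math.PI / 180;
              double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                         Math.Cos(lat1 * Math.PI / 180) * Math.Cos(lat2 * Math.PI / 180) *
                         Math.Sin(dLng / 2) * Math.Sin(dLng / 2);
              return 2 * EarthRadiusKm * Math.Asin(Math.Sqrt(a));
          }
      }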

    Read the article

  • Which OAuth library do you find works best for Objective-C/iPhone?

    - by Brennan
    I have been looking to switch to OAuth for my Twitter integration code and now that there is a deadline in less than 7 weeks (see countdown link) it is even more important to make the jump to OAuth. I have been doing Basic Authentication which is extremely easy. Unfortunately OAuth does not appear to be something that I would whip together in a couple of hours. http://www.countdowntooauth.com/ So I am looking to use a library. I have put together the following list. MPOAuth MGTwitterEngine OAuthConsumer I see that MPOAuth has some great features with a good deal of testing code in place but there is one big problem. It does not work. The sample iPhone project that is supposed to authenticate with Twitter causes an error which others have identified and logged as a bug. http://code.google.com/p/mpoauthconnection/issues/detail?id=29 The last code change was March 11 and this bug was filed on March 30. It has been over a month and this critical bug has not been fixed yet. So I have moved on to MGTwitterEngine. I pulled down the source code and loaded it up in Xcode. Immediately I find that there are a few dependencies and the README file does not have a clear list of steps to fetch those dependencies and integrate them with the project so that it builds successfully. I see this as a sign that the project is not mature enough for prime time. I see also that the project references 2 libraries for JSON when one should be enough. One is TouchJSON which has worked well for me so I am again discouraged from relying on this project for my applications. I did find that MGTwitterEngine makes use of OAuthConsumer which is one of many OAuth projects hosted by an OAuth project on Google Code. http://code.google.com/p/oauth/ http://code.google.com/p/oauthconsumer/wiki/UsingOAuthConsumer It looks like OAuthConsumer is a good choice at first glance. It is hosted with other OAuth libraries and has some nice documentation with it. I pulled down the code and it builds without errors but it does have many warnings. And when I run the new Build and Analyze feature in Xcode 3.2 I see 50 analyzer results. Many are marked as potential memory leaks which would likely lead to instability in any app which uses this library. It seems there is no clear winner and I have to go with something before the big Twitter OAuth deadline. Any suggestions?

    Read the article

  • Optimized .htaccess???

    - by StackOverflowNewbie
    I'd appreciate some feedback on the compression and caching configuration below. Trying to come up with a general purpose, optimized compression and caching configuration. If possible: Note your PageSpeed and YSlow grades Add configuration to your .htaccess Clear your cache Note your PageSpeed and YSlow grades to see if there are any improvements (or degradations) NOTE: Make sure you have appropriate modules loaded. Any feedback is much appreciated. Thanks. # JavaScript MIME type issues: # 1. Apache uses "application/javascript": http://svn.apache.org/repos/asf/httpd/httpd/branches/1.3.x/conf/mime.types # 2. IIS uses "application/x-javascript": http://technet.microsoft.com/en-us/library/bb742440.aspx # 3. SVG specification says it is text/ecmascript: http://www.w3.org/TR/2001/REC-SVG-20010904/script.html#ScriptElement # 4. HTML specification says it is text/javascript: http://www.w3.org/TR/1999/REC-html401-19991224/interact/scripts.html#h-18.2.2.2 # 5. "text/ecmascript" and "text/javascript" are considered obsolete: http://www.rfc-editor.org/rfc/rfc4329.txt #------------------------------------------------------------------------------- # Compression #------------------------------------------------------------------------------- <IfModule mod_deflate.c> AddOutputFilterByType DEFLATE application/xhtml+xml AddOutputFilterByType DEFLATE application/xml AddOutputFilterByType DEFLATE application/atom+xml AddOutputFilterByType DEFLATE text/css AddOutputFilterByType DEFLATE text/html AddOutputFilterByType DEFLATE text/plain AddOutputFilterByType DEFLATE text/xml # The following MIME types are in the process of registration AddOutputFilterByType DEFLATE application/xslt+xml AddOutputFilterByType DEFLATE image/svg+xml # The following MIME types are NOT registered AddOutputFilterByType DEFLATE application/mathml+xml AddOutputFilterByType DEFLATE application/rss+xml # Deal with JavaScript MIME type issues AddOutputFilterByType DEFLATE application/javascript AddOutputFilterByType DEFLATE application/x-javascript AddOutputFilterByType DEFLATE text/ecmascript AddOutputFilterByType DEFLATE text/javascript </IfModule> #------------------------------------------------------------------------------- # Expires header #------------------------------------------------------------------------------- <IfModule mod_expires.c> # 1. Set Expires to a minimum of 1 month, and preferably up to 1 year, in the future # (but not more than 1 year as that would violate the RFC guidelines) # 2. 
Use "Expires" over "Cache-Control: max-age" because it is more widely accepted ExpiresActive on ExpiresByType application/pdf "access plus 1 year" ExpiresByType application/x-shockwave-flash "access plus 1 year" ExpiresByType image/bmp "access plus 1 year" ExpiresByType image/gif "access plus 1 year" ExpiresByType image/jpeg "access plus 1 year" ExpiresByType image/png "access plus 1 year" ExpiresByType image/svg+xml "access plus 1 year" ExpiresByType image/tiff "access plus 1 year" ExpiresByType image/x-icon "access plus 1 year" ExpiresByType text/css "access plus 1 year" ExpiresByType video/x-flv "access plus 1 year" # Deal with JavaScript MIME type issues ExpiresByType application/javascript "access plus 1 year" ExpiresByType application/x-javascript "access plus 1 year" ExpiresByType text/ecmascript "access plus 1 year" ExpiresByType text/javascript "access plus 1 year" # Probably better to explicitly declare MIME types than to have a blanket rule for expiration # Uncomment below if you disagree #ExpiresDefault "access plus 1 year" </IfModule> #------------------------------------------------------------------------------- # Caching #------------------------------------------------------------------------------- <IfModule mod_headers.c> <FilesMatch "\.(bmp|css|flv|gif|ico|jpg|jpeg|js|pdf|png|svg|swf|tif|tiff)$"> Header add Cache-Control "public" Header unset ETag Header unset Last-Modified FileETag none </FilesMatch> </IfModule>

    Read the article

  • Function did not insert all records

    - by user1799459
    I wrote the following code for data transfer from Access to Firebird:

        import datetime
        import decimal

        def FirebirdDatetime(dt):
            return '\'%s.%s.%s\'' % (str(dt.day).rjust(2,'0'), str(dt.month).rjust(2,'0'), str(dt.year).rjust(4,'0'))

        def SelectFromAccessTable(tablename):
            return 'select * from [' + tablename + ']'

        def InsertToFirebirdTable(tablename, row):
            values = ''
            i = 0
            for elem in row:
                i += 1
                #print type(elem)
                if type(elem) == int:
                    temp = str(elem)
                elif (type(elem) == str) or (type(elem) == unicode):
                    temp = '\'%s\'' % (elem,)
                elif type(elem) == datetime.datetime:
                    temp = FirebirdDatetime(elem)
                elif type(elem) == decimal.Decimal:
                    temp = str(elem)
                elif elem == None:
                    temp = 'null'
                if (i < len(row)):
                    values += temp + ', '
                else:
                    values += temp
            return 'insert into ' + tablename + ' values (' + values + ')'

        def AccessToFirebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            SelectSql = SelectFromAccessTable(accesstablename)
            for row in accesscursor.execute(SelectSql):
                InsertSql = InsertToFirebirdTable(firebirdtablename, row)
                print InsertSql
                firebirdcursor.execute(InsertSql)

    In the main module there is an AccessToFirebird function call:

        conAcc = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\ThirdTask\Northwind.accdb')
        SqlAccess = conAcc.cursor()
        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('Customers', 'CLIENTS', SqlAccess, cur)
        conn.commit()
        conn.begin()
        cur = conn.cursor()
        sql.AccessToFirebird('??????????', 'EMPLOYEES', SqlAccess, cur)
        sql.AccessToFirebird('????', 'ROLES', SqlAccess, cur)
        sql.AccessToFirebird('???? ???????????', 'EMPLOYEES_ROLES', SqlAccess, cur)
        sql.AccessToFirebird('????????', 'DELIVERY', SqlAccess, cur)
        sql.AccessToFirebird('??????????', 'SUPPLIERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ???????', 'TAX_STATUS_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????? ? ??????', 'STATE_ORDER_DETAILS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ???????', 'CONDITION_OF_ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'ORDERS', SqlAccess, cur)
        sql.AccessToFirebird('?????', 'BILLS', SqlAccess, cur)
        sql.AccessToFirebird('????????? ?????? ?? ????????????', 'STATUS_PURCHASE_ORDER', SqlAccess, cur)
        sql.AccessToFirebird('?????? ?? ????????????', 'ORDERS_FOR_ACQUISITION', SqlAccess, cur)
        sql.AccessToFirebird('???????? ? ?????? ?? ????????????', 'INFORMPURCHASEORDER', SqlAccess, cur)
        sql.AccessToFirebird('??????', 'PRODUCTS', SqlAccess, cur)
        conn.commit()
        conAcc.commit()
        conn.close()
        conAcc.close()

    But as a result, not all records were inserted into the PRODUCTS table (the Goods table of the Northwind database). For example, this request does not work:

        insert into PRODUCTS values ('4', 1, 'NWTB-1', '?????????? ???', null, 13.5000, 18.0000, 10, 40, '10 ??????? ?? 20 ?????????', '10 ??????? ?? 20 ?????????', 10, '???????', '')

    In IBExpert this request produces the message:

        can't format message 13:587 -- message file C:\Windows\firebird.msg not found.
        conversion error from string "10 ?????????±???? ???? 20 ???°???µ?‚????????".

    Only these requests worked:

        insert into PRODUCTS values ('1', 82, 'NWTC-82', '???????', null, 2.0000, 4.0000, 20, 100, null, null, null, '????', '')
        insert into PRODUCTS values ('9', 83, 'NWTCS-83', '???????????? ?????', null, 0.5000, 1.8000, 30, 200, null, null, null, '????? ? ???????', '')
        insert into PRODUCTS values ('1', 97, 'NWTC-82', '???????', null, 3.0000, 5.0000, 50, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 98, 'NWTSO-98', '??????? ???', null, 1.0000, 1.8900, 100, 200, null, null, null, '????', '')
        insert into PRODUCTS values ('6', 99, 'NWTSO-99', '??????? ??????', null, 1.0000, 1.9500, 100, 200, null, null, null, '????', '')

    Other records were not inserted.
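    The Firebird error is a string conversion error on a non-ASCII literal, which points at the character set and quoting of the generated SQL rather than at the data itself. As an illustrative alternative (not the poster's code), the sketch below builds the INSERT with placeholders and passes the row values as parameters, so the driver handles quoting, dates and encoding. The function name is made up, and it assumes DB-API cursors like those above with a driver that uses the qmark ('?') parameter style.

        # Hypothetical sketch: parameterized inserts instead of hand-built SQL literals.
        # Assumes `accesscursor` and `firebirdcursor` are DB-API cursors as in the code above.
        def AccessToFirebirdParams(accesstablename, firebirdtablename, accesscursor, firebirdcursor):
            accesscursor.execute('select * from [' + accesstablename + ']')
            rows = accesscursor.fetchall()
            if not rows:
                return
            placeholders = ', '.join(['?'] * len(rows[0]))
            insert_sql = 'insert into ' + firebirdtablename + ' values (' + placeholders + ')'
            for row in rows:
                # The driver converts each Python value (int, unicode, datetime,
                # Decimal, None) to the column type, so no manual quoting or
                # date formatting is needed.
                firebirdcursor.execute(insert_sql, tuple(row))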

    Read the article

  • debugging JBoss 100% CPU usage

    - by NateS
    Originally posted on Server Fault, where it was suggested this question might be better asked here.

    We are using JBoss to run two of our WARs. One is our web app, the other is our web service. The web app accesses a database on another machine and makes requests to the web service. The web service makes JMS requests to other machines, aggregates the data, and returns it.

    At our biggest client, about once a month the JBoss Java process takes 100% of all CPUs. The machine running JBoss has 8 CPUs. Our web app is still accessible during this time; however, pages take about 3 minutes to load. Restarting JBoss restores everything to normal. The database machine and all the other machines are fine; only the machine running JBoss is affected. Memory usage is normal. Network utilization is normal. There are no suspect error messages in the JBoss logs.

    I have set up a test environment as close as possible to the client's production environment and I've done load testing with as much as 2x the number of concurrent users. I have not gotten my test environment to replicate the problem.

    Where do we go from here? How can we narrow down the problem? Currently the only plan we have is to wait until the problem occurs in production on its own, then do some debugging to determine the cause. So far people have just restarted JBoss when the problem occurred to minimize downtime. Next time it happens they will get a developer to take a look. The question is: next time it happens, what can be done to determine the cause?

    We could set up a separate JBoss instance on the same box and install the web app separately from the web service. This way, when the problem next occurs, we will know which WAR has the problem (assuming it is our code). This doesn't narrow it down much though.

    Should I enable JMX remote? That way, the next time the problem occurs I can connect with VisualVM and see which threads are taking the CPU and what the hell they are doing. However, is there a significant downside to enabling JMX remote in a production environment? Is there another way to see what threads are eating the CPU and to get a stacktrace to see what they are doing? Any other ideas? Thanks!
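    A lower-risk alternative to enabling remote JMX is to capture a handful of thread dumps with the JDK's jstack tool while the CPU is pegged and compare them; threads that show up busy in every dump are the likely culprits. The helper below is only an illustrative sketch, not part of the question: the output file names are made up, and it assumes the JDK's jstack is on the PATH and that the JBoss process ID is passed as the first argument.

        # Illustrative sketch: take several jstack dumps of the JBoss process,
        # a few seconds apart, for later comparison.  Assumes `jstack` (from the
        # JDK) is on the PATH; pass the JBoss PID as the first argument.
        import subprocess
        import sys
        import time

        def dump_threads(pid, count=5, interval=10):
            for i in range(count):
                out = subprocess.check_output(["jstack", "-l", str(pid)])
                filename = "jstack-%s-%02d.txt" % (pid, i)
                with open(filename, "wb") as f:
                    f.write(out)
                print("wrote " + filename)
                time.sleep(interval)

        if __name__ == "__main__":
            dump_threads(sys.argv[1])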

    Read the article

  • I had a problem adding additional content to my PDF

    - by Ayyappan.Anbalagan
    I am converting my data set into a PDF document. My data set contains the product bill details. So, at the top of the PDF I need to add some more content like "my company name & address, customer name, date of bill, bill no". I am using the code below to convert to PDF:

        public static void Exportdata(DataTable dataTable, HttpResponse Response, int val)
        {
            //String filename = String.Concat(name, "-", DateTime.Today.Day.ToString(), "/", DateTime.Today.Month.ToString(), "/", DateTime.Today.Year.ToString(), ".pdf");
            Document pdfDoc = new Document(PageSize.A4, 30, 30, 40, 25);
            System.IO.MemoryStream mStream = new System.IO.MemoryStream();
            PdfWriter writer = PdfWriter.GetInstance(pdfDoc, mStream);
            //int cols = 0;
            //int rows = 0;
            int cols = dataTable.Columns.Count;
            int rows = dataTable.Rows.Count;
            pdfDoc.Open();
            iTextSharp.text.Table pdfTable = new iTextSharp.text.Table(cols, rows);
            pdfTable.BorderWidth = 1;
            pdfTable.Width = 100;
            pdfTable.Padding = 1;
            pdfTable.Spacing = 1;
            //creating table headers
            for (int i = 0; i < cols; i++)
            {
                Cell cellCols = new Cell();
                Font ColFont = FontFactory.GetFont(FontFactory.HELVETICA, 8, Font.BOLD);
                Chunk chunkCols = new Chunk(dataTable.Columns[i].ColumnName, ColFont);
                cellCols.Add(chunkCols);
                pdfTable.AddCell(cellCols);
            }
            //creating table data (actual result)
            for (int k = 0; k < rows; k++)
            {
                for (int j = 0; j < cols; j++)
                {
                    Cell cellRows = new Cell();
                    Font RowFont = FontFactory.GetFont(FontFactory.HELVETICA, 6);
                    Chunk chunkRows = new Chunk(dataTable.Rows[k][j].ToString(), RowFont);
                    cellRows.Add(chunkRows);
                    pdfTable.AddCell(cellRows);
                }
            }
            pdfDoc.Add(pdfTable);
            pdfDoc.Close();
            Response.ContentType = "application/octet-stream";
            if (val == 1)
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Users.pdf");
            }
            else if (val == 2)
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Customers.pdf");
            }
            else if (val == 3)
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Materials.pdf");
            }
            else
            {
                Response.AddHeader("Content-Disposition", "attachment; filename=Reports.pdf");
            }
            Response.Clear();
            Response.BinaryWrite(mStream.ToArray());
            //Response.Write(mStream.ToString());
            HttpContext.Current.ApplicationInstance.CompleteRequest();
            Response.End();
        }

    Read the article

  • Occasional Date or timezone discrepancy in hudson or maven with jodatime

    - by TheStijn
    Hi, I hope the following explanation will make sense, because it's a weird problem we're facing and hard to describe. We have a Maven project which gets built in Hudson and contains some unit tests where dates are used and asserted. The Hudson server runs on Solaris. Now, occasionally (like 30% of the time) the unit tests using dates fail because 3.5 hours are deducted from the specified time in the unit test, and hence the asserts start failing. The other 70% of the time everything works fine, although nothing at all changed in the code, and we run the Hudson job several times an hour. I added the following code to a unit test to check the time:

        @Test
        public void testDate() {
            System.out.println("new DateMidnight(2011, 1, 5).toDate();");
            System.out.println(new DateMidnight(2011, 1, 5).toDate());
            System.out.println(new DateMidnight(2011, 1, 5).toDate().getTime());

            Calendar cal = Calendar.getInstance();
            cal.set(Calendar.YEAR, 2011);
            cal.set(Calendar.MONTH, 0);
            cal.set(Calendar.DAY_OF_MONTH, 5);
            cal.set(Calendar.HOUR, 0);
            cal.set(Calendar.MINUTE, 0);
            cal.set(Calendar.SECOND, 0);
            cal.set(Calendar.MILLISECOND, 0);

            System.out.println("cal.getTime();");
            System.out.println(cal.getTime());
            System.out.println(cal.getTime().getTime());
        }

    So basically it should print the same thing whether using Joda-Time or the plain old Calendar. This is the case in 70% of the runs; for the other 30% I get the following printouts:

        Running TestSuite
        new DateMidnight(2011, 1, 5).toDate();
        Tue Jan 04 21:30:00 MET 2011
        1294173000000
        cal.getTime();
        Wed Jan 05 12:00:00 MET 2011
        1294225200000

    Local Maven tests never appear to pose this problem, and we can't figure out what could be the cause of it. Especially, we can't think of a single reason why the tests sometimes pass and sometimes fail without changing any code, nor any Hudson or server setting. Also, we run the Maven install with Cobertura, which means that the unit tests are run twice. It also happens that they pass the first time and fail the second time, or the other way around, or that they fail both times. Thanks for any help, Stijn

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many to many relatio

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing and joining the productionMinute and engineEvent tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table:

        +-------------------+
        | production_minute |
        +-------------------+
        | 2009-12-01 00:00  |
        | 2009-12-01 00:01  |
        | 2009-12-01 00:02  |
        | 2009-12-01 00:03  |
        +-------------------+

    The join between the tables would be:

        engineState AS es
        LEFT JOIN productionMinute AS pm
               ON es.state_start_time <= pm.production_minute
              AND pm.production_minute <= es.event_end_time

    This join, however, brings up multiple environmental issues:

    - The engineState table has 5 million rows and the productionMinute table has 130,000 rows
    - When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, there are multiple productionMinute table rows that join to a single engineState table row
    - When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState table rows join to a single productionMinute row
    - In testing our logic and using only a small table extract (one day rather than 3 months, for the productionMinute table) the query takes over an hour to generate

    In researching this item in order to improve performance so that it would be feasible to query three months of data, our thoughts were to create a temporary table from the engineEvent one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance.

    What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
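    For reference, generating the proposed productionMinute rows is simple to do outside the database; the sketch below (not from the question) builds the 2009-12-01 to 2010-02-28 minute range and shows, commented out, a hypothetical bulk load -- the `conn` object and the executemany call are assumptions about whatever MySQL driver is in use.

        # Illustrative sketch: one row per minute for 2009-12-01 .. 2010-02-28
        # (90 days x 1440 minutes = 129,600 rows, matching the figure above).
        from datetime import datetime, timedelta

        def minute_range(start, end_exclusive):
            current = start
            while current < end_exclusive:
                yield current
                current += timedelta(minutes=1)

        minutes = list(minute_range(datetime(2009, 12, 1), datetime(2010, 3, 1)))
        print(len(minutes))    # 129600
        print(minutes[:4])     # the first few production_minute values

        # Hypothetical bulk load, assuming a DB-API connection `conn` to the schema:
        # cur = conn.cursor()
        # cur.executemany("INSERT INTO productionMinute (production_minute) VALUES (%s)",
        #                 [(m,) for m in minutes])
        # conn.commit()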

    Read the article

  • Offline backup synchronization

    - by Pavan Kumar
    There is a central server running Windows Server 2003 and SQL Server 2005, and there are 7 client machines situated in various places, each with XP Pro and SQL Server 2005 installed. They are not interconnected, so they are physically separate. One person goes to each of these centers maybe twice a month and takes the backup (a full database consisting of mdf and ldf files) on a pen drive and brings it to the central server, which contains the central database holding the same schema as all the other client databases. I need to synchronize each backup database (belonging to a different center) one by one, to update existing data or insert new data in the central database.

    The solution I came up with was replication. The pen drive is brought to the central server with the 7 instances of the databases, and then the databases are attached to the central server one by one, to the same SQL Server where the central database exists. My idea was then to replicate the backup databases one by one, i.e. using a single-subscription (central database), multiple-publication (the 7 database instances in my case) topology, performing replication locally (i.e. on the same machine). So I tried to develop a UI in C# .NET to programmatically run transactional replication with a push subscription using RMO programming (which is incomplete as of now, because there is no point in developing it when you already know it is not the solution).

    Transactional replication can be set to initialize either with a snapshot or without a snapshot. If I go for the first option, i.e. with a snapshot, whatever data is present in the central database is overwritten by the new data, so the data initially present in the central database is lost. If I try to initialize without a snapshot, no data will be sent from the backup database to the server (the backup database already contains the updated and new data). Replication only works in a scenario where incremental changes are made after the replication has been set up. So the initial data, whatever was present in the backup database when setting up the replication, will not be replicated when running the snapshot agent for the first time to synchronize; only changes made to the backup database thereafter will be reflected in the central database. (Remember, I am not going to insert new data or make any changes to the backup database after I attach it to the central server.) So this solution is not feasible.

    I want a solution for synchronizing from one client database to the central database present on the same machine using C#.NET. If you can provide a small example, maybe with two databases (with the same schema), DB1 (client) to DB2 (server), consisting of one or two tables, it would be very helpful. The synchronization is not bidirectional; I only want to update existing data or insert new data from DB1 to DB2 (DB2 may contain some data initially). Thanks and Regards, Pavan

    Read the article

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .NET app is a single instance. The documents are pretty much read only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. For example, one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint (used?) to use, and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it will map to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. However, I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions I guess:

    - Is a NoSQL DB a good fit?
    - Is MongoDB the best for ASP.NET (I saw Raven and Velocity, but they're still kinda beta)?
    - Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would be if a person clicks "View History" against a document.
    - How would performance compare between the two (NoSQL DB vs denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.

    Read the article
