Search Results

Search found 1492 results on 60 pages for 'tim mahy'.

Page 13/60

  • June LCNUG Presentation

    - by Tim Murphy
    Office Open XML has been my focus for the last 8 months.  We are creating solutions that generate data- and business-rule-heavy presentations and documents.  On June 24th I will be covering how to use OOXML to generate documents that can be used as sales and marketing collateral.  Register below and come out and join the discussion. http://www.eventbrite.com/event/722041646 del.icio.us Tags: Office Open XML,OOXML,PSC Group,LCNUG,Document Generation

    Read the article

  • Is djvubundle available in Ubuntu?

    - by Tim
    The official webpage says: "Assembling DjVu Images into Multipage Documents. The batch compressors distributed as part of the DjVuText and DjVuLayered packages can directly produce multipage DjVu files when fed with multiple input files. The files produced are smaller than if the pages are compressed separately because the compressor can extract and share redundant information across multiple pages. Individually compressed DjVu pages can be assembled into multipage documents using the free package DjVuMulti. To assemble a bunch of DjVu images into a single BUNDLED document simply type: djvubundle page1.djvu page2.djvu ... pageN.djvu document.djvu To assemble a bunch of DjVu images into an INDIRECT document, type: djvujoin page1.djvu page2.djvu ... pageN.djvu documentdir/index.djvu where documentdir must be an existing directory where all the individual page files will be copied. To disassemble a BUNDLED document into an INDIRECT one, simply say: djvujoin document.djvu documentdir/indexfile.djvu To convert a multipage document from one of the old 2.0 multipage formats, do: djvureindex olddocument newdocument The programs djvujoin and djvubundle supersede the 2.0 programs djvuindex and djvumerge." I couldn't find djvujoin or djvubundle for Ubuntu, and djvulibre doesn't include them either. Am I missing something? Thanks.

    Read the article

  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009 Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc.  Since that time Microsoft has released version 2.0 of the SDK and PSC has done significant development with it.  Below are some of the milestones we have reached since the original case study. At the time of the original case study two report types had been automated to output as PowerPoint presentations.  Now that all the main products have been delivered we have added three reports with Word document outputs and five more reports with PowerPoint outputs. One improvement we made over the original application was to create a PowerPoint Add-In which allows the users to tag a slide.  These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files.  This allows for a more flexible architecture based on assembling a presentation from copied slides extracted from the templates. The new library we created also enabled us to create two new Word based reports in two weeks.  The library we created abstracts the generation of the documents from the business logic and the data retrieval.  The key to this is the markup.  Content Controls are a good method for identifying sections of a template to be modified or replaced.  Join this with the concept of all data being generically either scalar or two dimensional and the code becomes more generic. In the end we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients.  del.icio.us Tags: PSC Group,OOXML,Case Study,Office Open XML,Word,PowerPoint

    Read the article

  • October 2012 Chicago IT Architects Group Meeting Recap

    - by Tim Murphy
    It seemed very ironic that on the day we had a presentation on the architecture of building applications for Windows 8, the Surface tablet was opened for pre-order.  Tom Benton started the evening enlightening the attendees on the user experience for those who had not seen it yet.  He even passed around his tablet from last year’s Build conference for everyone to play with.  This was followed with a tour of the capabilities and structures that make up a Windows Store App on Windows 8.  Taking it to its conclusion, he rounded out the discussion by covering the certification and deployment process. As usual it was great to see a lot of familiar faces last night.  We are always looking for more people to join in our discussions.  Stay tuned here for announcements of upcoming meetings and topics.  Also, if you have a topic you would like to present or see presented feel free to contact me through this blog. del.icio.us Tags: Chicago Information Technology Architects Group,CITAG,Windows 8,Windows Store,Tom Benton

    Read the article

  • Not able to suspend or hibernate

    - by Tim
    My Ubuntu 10.10 on my Lenovo T400 laptop is not able to suspend or hibernate. Whenever I click Suspend or Hibernate, the moon LED on the bottom of the lid flashes for a few seconds, the screen quickly shows something like "some devices fail to suspend, error 5", and then the moon LED goes off and the display still has ambient light illumination. I suppose that in the suspend or hibernation state the display should have no illumination, just like when the laptop is turned off, right? If I press any key, the unlock screen dialogue pops up. I searched a little on the internet and installed 'acpi-support' according to some advice, but it does not help. Any suggestions to solve this problem? Thanks and regards! ADDED: Laptop specifications:
    CPU: Intel Mobile Core 2 Duo P8800 @ 2.66GHz, Penryn 45nm Technology
    RAM: 1.9GB Single-Channel DDR3 @ 532MHz (7-7-7-20)
    Motherboard: LENOVO 2764CTO (None)
    Graphics: ThinkPad Display 1440x900 @ 1440x900, ATI Mobility Radeon HD 3400 Series (Lenovo)
    Hard Drives: 244GB Western Digital WDC WD2500BEVS-08VAT2 (SATA)
    Optical Drives: HL-DT-ST DVDRAM GSA-U20N, AZCDW EFCPUZ452 SCSI CdRom Device, AZCDW EFCPUZ452 SCSI CdRom Device
    Audio: Conexant 20561 SmartAudio HD

    Read the article

  • Setting Gmail as mail server

    - by Tim S.
    I’m in a slightly weird situation right now, and I don’t have sufficient knowledge to sort this out myself without truly understanding what I’m doing. Yesterday I registered a domain (.com) and ordered a VPS attached to that domain. Chances are I may receive mail on my .com address to confirm the domain. Unfortunately, that domain is nothing but an empty domain. Currently, there’s no mailserver that fetches my mail. Because I don’t have a mailserver available, I (temporarily) want to use Gmail. I’d prefer to add it to my existing, personal address, but I’m okay with creating a new account as well. I just want to read possible incoming mails. I’ve tried to set MX records to What do I need to do to get mail to a Gmail address? PS. I’m aware of Google, NSA, etc. PPS. I just want to receive mail. I don’t care if I can’t send via my domain. PPPS. Detailed steps would be greatly appreciated, I’m a noob.

    Read the article

  • Microsoft BUILD 2013 Day 1 – Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer.  He made it very clear that Microsoft’s focus is on accelerating its time to market with products and product updates.  His quote was that “Rapid release” is the new norm.  He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect.  This time it was “Rapid release, rapid release” and “Touch, touch, touch, touch, touch, …”.  This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8” tablet. SCORE! The next subject Ballmer focused on was new apps.  The three new ones were Flipboard, Facebook and NFL Fantasy Football.  I liked the first two because these are ones that people coming from other platforms are missing.  The NFL app is great just because it targets a demographic that can be fanatical.  If these types of apps keep coming then the missing-app argument goes away. While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a “refined blend”, as in a coffee that has been improved with a new mix.  This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Ballmer was followed by Julie Larson-Green who, from my point of view, spent her time on stage selling us on Windows 8 all over again.  Something that I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints.  She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience.  I guess only time will tell. I did like the fact that the UI implementation to bring up “All Apps” now mirrors that of Windows Phone.  The consistency is a big step forward that I hope to see continue.  The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One.  This seamless experience I believe is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5,000 new APIs.  How that can be, or how anyone would ever use all of them, is another question.  His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits.  One of the features of VS2013 that he demonstrated is the power consumption profiler.  With battery life being a key factor with consumer consumption devices this is a welcome addition. He didn’t limit his presentation to VS2013 features though.  He showed how the Store has been redesigned to enable better search and discoverability of apps, and how Windows 8.1 can automatically scale to multiple screen sizes depending on the resolution of the device.  The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot.  Oh, but there was one more thing.  
    Antoine and Julie announced that all attendees would also be getting Surface Pros.  BONUS! How much more could there be?  Gurdeep Singh Pall was about to pile on.  He introduced us to Bing as a platform (BaaP?).  He said that if they (Microsoft) could do something good with an API, 3rd party developers could do something that is dynamite, and he showed us some of the tools they had produced.  These included natural user interface improvements such as voice commands that looked to put Siri to shame.  Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Ballmer then came out to show us one last thing.  Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One.  All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw.  What could they have possibly left for the day 2 keynote?  I hear it will feature Scott Hanselman.  If that is right we are in for a treat.  See you there. del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • How can I inform search engines that the usefulness of some content on my site has a limited shelf life?

    - by Tim Post
    Let's say that I run a forum dedicated to computer hardware. Naturally, people are going to ask questions like "What is the best laptop for running [os]?" or "What is the best video card for under [amount]?" These may be perfectly fine discussions, but the content loses usefulness over time. An answer to either question asked in 2007 might still be relevant in 2008, but definitely not in 2012. Is there a way that I can tell search engines that certain pages might not give visitors what they're looking for after a certain date, and perhaps hint at a page on my site that would provide good information? Perhaps something I could set in HTTP response headers, meta tags or even a sitemap?
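
    One concrete option for the "HTTP response headers" idea is Google's unavailable_after directive, which Google documents for both the robots meta tag and the X-Robots-Tag response header (other engines may ignore it). A minimal PHP sketch, with a placeholder expiry date and no claim about how strongly crawlers honor it:

        <?php
        // Hint that this thread's content has a limited shelf life.
        // unavailable_after is a Google-documented directive; the date below is a placeholder.
        $expires = '31 Dec 2013 23:59:59 GMT';

        header('X-Robots-Tag: unavailable_after: ' . $expires);   // HTTP header variant
        ?>
        <!-- meta tag variant, emitted in the page head -->
        <meta name="robots" content="unavailable_after: <?php echo $expires; ?>">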

    Read the article

  • Is there a Visual Studio style tool/IDE?

    - by Tim
    I have been developing in the Windows space with Visual Studio for a while now for work, but I have also been using Ubuntu for a while and am keen to get into some software development for Linux. I should also note that I am not looking for .NET, and I am aware of Mono. I am also familiar with C++ development and some Python, so the language isn't so much relevant as the "all in one" aspect. I was interested to know if there is a useful all-in-one code/debug/design (GUI) IDE similar to something like Visual Studio, but for Linux?

    Read the article

  • Unable to print login-required images in IE

    - by Tim Fountain
    I have some images in a section of a site that require the user to be logged in in order to view them. These images are served by a PHP script, which checks the user's login state and, if valid, serves the binary data with the appropriate headers. This all works fine. The issue comes when a user tries to print one of these images. In Internet Explorer, when they go to print preview they get the broken-image box with a red cross in the corner instead of the actual file. This is what gets printed also. All other browsers can print the images without issue. I have some images elsewhere on the site that are also served via PHP but don't require a login. These print fine. The PHP-powered HTML pages on the site that require a login also print fine in IE. It's just login-required images. The user hitting print preview does not seem to result in an additional HTTP request to the server for the file. However, I do see an additional HTTP request a few seconds later that comes from the same IP (which may or may not be related). This request includes no Host header, no REQUEST_URI and no user agent. The 'please login' page sends an appropriate 403 header. I've also added a far-in-future Expires header to the image response itself to ensure that browsers can serve/print the files from their own cache, but this hasn't made any difference. Why can't IE print the images and what else can I do to investigate or fix the problem?
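
    For reference, a serving script of the kind described above might look roughly like the sketch below: a session check, then the binary with an explicit Content-Type and private cache headers so the browser's print preview can reuse its cached copy. The session key, path and MIME type are placeholders, not the asker's actual code:

        <?php
        // image.php?id=123  (hypothetical endpoint; adjust names to the real script)
        session_start();

        if (empty($_SESSION['user_id'])) {          // placeholder login check
            header('HTTP/1.1 403 Forbidden');
            exit('Please log in');
        }

        $path = '/var/www/protected/' . basename($_GET['id']) . '.jpg';   // placeholder path
        if (!is_file($path)) {
            header('HTTP/1.1 404 Not Found');
            exit;
        }

        // Private caching so the browser (and its print preview) can reuse the image
        // without issuing a fresh, possibly unauthenticated, request.
        header('Content-Type: image/jpeg');
        header('Content-Length: ' . filesize($path));
        header('Cache-Control: private, max-age=31536000');
        header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 31536000) . ' GMT');

        readfile($path);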

    Read the article

  • chrome download interrupted leaving ".crdownload" temporary file

    - by Tim
    I am using Google Chrome 15.0.874.121 in Ubuntu 10.10. It was fine until recently. Whenever I download a file, it always reports "Interrupted", but it actually finishes the download, leaving an intermediate file with the extension ".crdownload". If I remove the extension, the file is perfect. Note that downloading in Firefox works fine at the same time. So I wonder: is this a bug, or how can I fix it? Thanks and regards!

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks, I ran into a scenario where I needed to present a parent-child data layout.  It is rare that I have seen a report that was a simple tabular or matrix format, and this report continued that trend.  I found that the processes for developing complex SSRS reports aren’t as commonly described as I would have thought.  Below I will lay out the process that I went through to create a solution. I started with a List control which will contain the layout of the master (parent) information.  This allows for a main repeating report part.  The dataset for this report should include the data elements needed to be passed to the subreport as parameters.  As you can see, the layout is simply text boxes that are bound to the dataset. The next step is to set a row group on the List row.  When the dialog appears, select the field that you wish to group your report by.  A good example in this case would be the employee name or ID. Create a second report which becomes the subreport.  The example below has a matrix control.  Create the report as you would any parameter-driven document by parameterizing the dataset. Add the subreport to the main report inside the row of the List control.  This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property. The last step is to set the parameters on the subreport.  In this case the subreport has EmpId and ReportYear as parameters.  Some of the documentation states that the dialog will automatically detect the child parameters, but this has not been my experience.  You must make sure that the names match exactly.  Tie the name of each parameter to either a field in the dataset or a parameter of the parent report. del.icio.us Tags: SQL Server Reporting Services,SSRS,SQL Server,Subreports

    Read the article

  • Order of partitions for root, home and swap with respect to Windows partitions

    - by Tim
    I am installing Ubuntu on the same hard drive as Windows 7. The partitions of Windows 7 already occupy the left part of the hard drive. I was wondering how to arrange the order of the root, home and swap partitions, i.e. which should be on the left just beside the Windows partitions, which in the middle and which on the far right? Is there anything to consider regarding this arrangement? Thanks and regards!

    Read the article

  • Setup shortcut keys not working

    - by Tim
    In my Ubuntu 12.04 keyboard settings I didn't find a shortcut key for restarting X, so under "Custom Shortcuts" I set up Ctrl+Alt+Backspace for the command sudo restart lightdm. But the shortcut doesn't work. Is it because it requires root privileges? Also, I have a SysRq key on my keyboard, which I think is the "magic SysRq key". My SysRq key is shared with the PrtSc key (for screenshots) and is labelled in blue, which means I have to press the Fn key at the same time to invoke SysRq instead of PrtSc. But every time I press Fn+SysRq, it always takes a screenshot, the same as just hitting PrtSc, i.e. without hitting Fn. I wonder how to use the magic SysRq key. Does this mean the key has not been linked to any of the commands intended for the magic SysRq key yet? PS: My laptop is a Lenovo T400 and the OS is Ubuntu 12.04. Thanks!

    Read the article

  • Server for online browser game

    - by Tim Rogers
    I am going to be making an online single-player browser game. The online element is needed so that a player can log in and store the state of their game. This will include things like what buildings have been made and where they have been positioned, as well as the user's personal statistics and achievements. At this point in time I am expecting all of the game logic to be performed client-side. So far, I am thinking I will use Flash for creating the client side of the game. I am also creating a MySQL database to store all the user information. My question is how do I connect the two. Presumably I will need some sort of server application which will listen for incoming requests from any clients, perform the SQL query and then return the data. Does anyone have any recommendations of what technology/language to use?
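
    A later entry on this page by the same author settles on a PHP back-end, so here is a minimal sketch of the kind of server application being described: a script that receives a state-save request, runs the SQL and returns a result. The endpoint name, table and columns are invented for illustration:

        <?php
        // save_state.php  (hypothetical endpoint; schema and names are placeholders)
        header('Content-Type: application/json');

        $pdo = new PDO('mysql:host=localhost;dbname=game', 'gameuser', 'secret',
                       array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

        $playerId = isset($_POST['player_id']) ? (int)$_POST['player_id'] : 0;
        $state    = isset($_POST['state']) ? $_POST['state'] : '';   // e.g. a JSON blob of buildings/positions

        // game_state is assumed to have a unique key on player_id.
        $stmt = $pdo->prepare('INSERT INTO game_state (player_id, state) VALUES (:id, :state)
                               ON DUPLICATE KEY UPDATE state = :state2');
        $stmt->execute(array(':id' => $playerId, ':state' => $state, ':state2' => $state));

        echo json_encode(array('ok' => true));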

    Read the article

  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants: one class for GUI and internal constants (tab page titles, group box titles, calculation factors, enumerations); one class for database tables and columns (this part is generated code), plus readable names for them (manually assigned); and one class for application messages (logging, message boxes etc.). The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only declared in the .h file and the values are assigned in the .cpp file. One of the advantages is that all strings etc. are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar group boxes / tab pages at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc. However, I see certain disadvantages: Every single class is tightly coupled to the constants classes. Adding/removing/renaming/moving a constant requires recompilation of at least 90% of the application (note: changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed-optimized release through the Visual Studio compiler takes up to 3 hours. I don't know if the huge number of class relations is the source, but it might as well be. You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later" thoughts. Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier). Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer. PS: I know this question is kind of subjective, but I honestly don't know of any better place than this site for this kind of question. Update on this project: I have news on the compile-time thing. Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now possible much more easily. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table and column names and more, more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants now allowed us to reduce the total compile time of our project (several applications) in release mode from up to 8 hours to less than one hour! 
    We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely on nightly builds only. That fact, and other benefits such as less tight coupling and better reusability, also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)

    Read the article

  • how does server communication work in a flash game with a php backend

    - by Tim Rogers
    I am trying to create a browser game using ActionScript/Flash. Currently, I'm trying to understand how I would go about creating a back-end which interfaces with my MySQL database. As far as I understand, if I create a PHP file on a webserver called test.php and then navigate to a webpage hosted on the server, e.g. www.example.com/test, the PHP script will run and display the result in my browser. This would use HTTP. Is this how communication between client and server usually works in a Flash game, for example if the game needed to query the DB? Would ActionScript essentially have to invoke the URL of the PHP script that would execute the query? It could then parse the data and use it. If this is the case, then is JSON considered a good way to transfer data over HTTP?
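
    The exchange described above usually looks like this on the PHP side: the script runs the query and echoes JSON for the SWF (requesting via URLLoader) to parse. A hedged sketch; the endpoint, table and column names are invented for illustration:

        <?php
        // get_player.php?id=42  (hypothetical endpoint the Flash client would request)
        header('Content-Type: application/json');

        $mysqli = new mysqli('localhost', 'gameuser', 'secret', 'game');

        $id   = isset($_GET['id']) ? (int)$_GET['id'] : 0;
        $stmt = $mysqli->prepare('SELECT name, gold, level FROM players WHERE id = ?');
        $stmt->bind_param('i', $id);
        $stmt->execute();
        $stmt->bind_result($name, $gold, $level);

        if ($stmt->fetch()) {
            // The SWF parses this JSON response.
            echo json_encode(array('name' => $name, 'gold' => $gold, 'level' => $level));
        } else {
            echo json_encode(array('error' => 'player not found'));
        }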

    Read the article

  • Nokia Lumia 920 Windows Phone 8 Announcement

    - by Tim Murphy
    Today Nokia and Microsoft had an event to officially introduce the Lumia 920.  Below is a rundown of some of the things I found interesting. As a person who likes photography there was a lot to drool over.  The main feature that caught my attention was PureView with its optical stabilization.  This alone should improve the majority of your pictures.  Add to that the SmartShoot object remover, which uses multiple images to remove unwanted people or objects that move through your picture, and you never have to accept reality again. For the most part the lenses concept introduced in Windows Phone 8 just makes leveraging the camera more usable.  Of course that is Microsoft’s selling point.  One lens that caught my attention was the Bing lens.  I have to say it is about time that we can take pictures and use them to search for answers using Bing. There were a couple of features shown that involved augmented reality.  One was similar to the yapf application that is already in the market, which overlays restaurants and other destinations over live camera views.  The other was using the navigation directions with a live view. Then you get down to some of the physical features of the Lumia 920.  The one that got the most stage time is its great 2000 mAh battery, which can be charged wirelessly.  They also pointed out the improved glare reduction of the 4.5 in. curved glass screen.  This hardware improvement is enhanced further with software that detects glare conditions and adjusts the display attributes to improve viewing ease. Adding to the wireless cool factor of the Lumia 920 are the general NFC capabilities.  This was demonstrated with NFC docking stations as well as JBL speakers and headphones. There was one more hardware feature that I applauded.  The super-sensitive touch screen did away with one of my pet peeves with capacitive touch screens.  You will never have to remove your gloves to operate your phone again.  The mittens that they did the demo with looked more like boxing gloves. I was disappointed when Joe Belfiore said that they were only going to show a couple of new features of Windows Phone 8 and we would hear more at future events.  One of the things he did show is the ability to customize which buttons you prefer as defaults in IE10.  For example, you could have the folders button where the refresh button normally is.  He also showed that at long last you can natively take screenshots on your phone.  Hopefully he will be back quickly to give us the rest of the features. The most disappointing part of the event was that we never found out when the phones would be released or how much they would cost.  Let’s hope this comes soon.  Even with these couple of items still left on my wish list I can’t wait to get my hands on a Lumia 920.  del.icio.us Tags: Windows Phone,Windows Phone 8,Nokia,Lumia,Lumia 920,Microsoft

    Read the article

  • Internet is far slower in Ubuntu than Windows 7 on dual-booted machine

    - by Tim
    Edit: I'll leave the original post as-is, but after further investigation, it appears that the problem is something to do with my wi-fi card. Speeds are normal when I connect via cable. Edit 2: The problem was solved. It was something to do with the wireless card drivers. I normally use Windows 7 on my laptop and have internet speeds that are normally about 15-20 Mb/s. I have recently dual-booted with Ubuntu 12.10, and have noticed that internet speeds are drastically slower in Ubuntu. When tested, speeds range from 0.2-2 Mb/s, although they are occasionally significantly faster than that or even stop completely for short periods of time. I've also noticed that when first booting into Ubuntu, speeds start fairly fast and drop to incredibly slow within a few seconds to a few minutes. There's still some possibility that the issue may be with my ISP, as things seem slower than usual even in Windows, but I suspect that it is related to Ubuntu, as things are far slower in Ubuntu than in Windows. I'm wondering, what could be the cause of this? Potentially relevant information: I've dual-booted before on this machine with earlier versions of Ubuntu (different ISP at the time) with no problem. ISP: Rogers (major Canadian ISP). System info (Gateway NV53a laptop):
    Operating System: MS Windows 7 Home Premium 64-bit
    CPU: AMD Phenom II N970 Caspian 45nm Technology
    RAM: 6.00 GB Dual-Channel DDR3 @ 664MHz (9-9-9-24)
    Motherboard: Gateway SJV51_DN (Socket S1G4)
    Graphics: Generic PnP Monitor (1366x768@60Hz), ATI Mobility Radeon HD 4250 (Acer Incorporated [ALI])
    Hard Drives: 733GB TOSHIBA MK7559GSXP ATA Device (SATA)
    Networking info: Connected through Wi-Fi, Atheros AR5B97 Wireless Network A

    Read the article

  • Wireless drops on HP ENVY dv6 with RT3290 wireless, worked without problem prior to upgrading to Ubuntu 13.10, can it be fixed?

    - by Tim
    I have an HP ENVY dv6 Notebook PC with an AMD A10 quad core and RT3290 wireless. Since I upgraded from Ubuntu 13.04 to 13.10, the wireless connects but then drops after a few minutes or longer, whether or not I am running openconnect to get through a VPN. If I attempt to run a remote X client (e.g. a remote xterm) it drops. If I don't run an X client, it disconnects after a while, requiring a reload of the driver and a reconnect. Wireless info... sudo lshw -c network
    *-network
    description: Wireless interface
    product: RT3290 Wireless 802.11n 1T/1R PCIe
    vendor: Ralink corp.
    physical id: 0
    bus info: pci@0000:02:00.0
    logical name: wlan0
    version: 00
    serial: 68:94:23:a7:09:cb
    width: 32 bits
    clock: 33MHz
    capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
    configuration: broadcast=yes driver=rt2800pci driverversion=3.11.0-12-generic firmware=0.37 ip=192.168.1.115 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
    resources: irq:55 memory:f0210000-f021ffff
    I have successfully built and installed the MediaTek driver, but with no luck connecting; the system then hangs on reboot and I have to recover/undo the changes to boot successfully.

    Read the article

  • How to fix AVI index error

    - by Tim
    I am trying to open an AVI file. The first software I tried was VLC media player. It reported an error about the AVI index: "This AVI file is broken. Seeking will not work correctly. Do you want to try to fix it? This might take a long time." I chose yes, and it began fixing the AVI index but exited when the repair progress bar reached 20% or so. Then the video started playing and stopped much earlier than it was supposed to finish. Next I tried to open it in Totem Movie Player, which also stopped early at the same place as in VLC. I then tried to play it in GMplayer. Now the entire AVI file can be played from start to finish, but it is impossible to drag the playing progress bar, which was possible in VLC and Totem. I heard that Avidemux can fix AVI index errors, but it turned out it failed to even open the AVI file before it could try to fix the error. So I was wondering how I can fix the AVI index error, or at least drag the playing progress bar in GMplayer? Thanks and regards!

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers'. Application server: should be able to run CentOS (or another light Linux distro), Apache, PHP and GD (so it does rely on its CPU); should be extremely reliable and fast. Database server: should be able to run MySQL and... well, do nothing else :P; should be extremely reliable and fast. Storage server: should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.) and do nothing else; should be extremely reliable and fast. So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the webpages. My questions: What services do you recommend? Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)? How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)? What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery network, or do you only pay "including CDN"? Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications of a nameserver? Can I write the software myself? Does anyone have a good protocol description? I hope you can answer my questions. Answers: I shouldn't write my own nameserver software. Instead, I should use something like BIND (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • UPDATE FOR BI PUBLISHER ENTERPRISE 10.1.3.4.2 NOVEMBER 2011

    - by Tim Dexter
    It's Friday; that means it's patch release time. Why do we do this to ourselves, 'we'll release on Friday!'? It might be 11:59 on Friday, but by golly we'll have released on Friday. I can remember a release of BIP years ago where for some reason we went for 12/31 as a release date ... were we mad? I seem to remember we made it, but talk about ridiculous pressure! The latest 10g rollup is out in the wild and available from Oracle support. It's a bug-fixing rollup, but worth getting to, and know that support will want you to get to it and re-test before going forward on an SR. One simple but very useful fix or enhancement: [Cause of the bug] Customer reports that despite the clock being shown, end users are clicking on the View button repeatedly as the initial generation is taking some time. If the button were to be grayed out then this would prevent the users requesting the report more than once. Repeated requests are causing a system overload and as this is their Production instance this is extremely important to the customer. [The Fix] Added the logic to disable the button after the user clicks on the "view" button and re-enable it when the report is loaded. I told a group of customers once that they have a headache and we have a non-steroidal anti-inflammatory drug, alright, I actually said 'aspirin'. This little gem of a fix helps relieve another little headache that our aspirin was causing. The patch number for all this BIP pain killing is 13399232, enjoy!

    Read the article

  • TechEd 2012: Recap

    - by Tim Murphy
    TechEd this week was a great experience and I wanted to wrap it up with a summary post. First let me say a thank you to John and Jeff from GWB for supplying power, connectivity and a place to work in between sessions.  The blogging hub was a great experience in itself.  Getting to talk with other bloggers and other conference goers turned into a series of interesting conversations.  And where else can you almost end up in the day 1 highlights video? The sessions at TechEd were a mixed bag of value.  The keynotes rocked, both figuratively and literally, and most of the sessions that I went to were a good experience and had gems of information to take away.  There were a few exceptions though.  A couple of the sessions turned out to be sales jobs.  Nothing turns me off more than that (there will be some really honest comments on those surveys). TechEd reinforced for me that much of the value is not in the sessions, but in the networking opportunities. I got to talk with several Microsoft team members and MVPs as well as some of the vendor representatives for companies like InRule and ComponentOne. I also got to expand both my local and extended community with discussions at meal times and while waiting for sessions to start. I think this is one of the benefits that a lot of people don't take advantage of at these conferences and that should be a bigger part of the advertising. Exposure to a wide variety of topics, many of which I had not been able to make time for up to this point, was invigorating.  The list of topics includes: Office 365, Windows Server 2012, Windows 8, Metro, Azure.  I can't wait to get back to work and dig into these subjects in more depth. The one complaint that I had, and heard from other attendees, was that there weren't enough sessions that were actually about development.  I realize that TechEd started as an event for IT pros, but there needs to be more value for the devs.  It all went by too fast and it will take a couple more days to digest the material, but the batteries are recharged and I'm ready to leverage what I've learned.  Hopefully we will do it again next year. del.icio.us Tags: TechEd,TechEd 2012

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might when integrating BIP, they have had to make a choice about how to integrate. 1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route, you are going to need to manage templates, data extraction and processing. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: the content manager looks after the report artifacts itself and provides the UI to manage and run the reports. 2. Web service level - further up the stack is the web service layer. This is sitting on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports and users (locally managed) on the server, and the catalog itself, via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access, then the parameters and LOVs for the selected report, and finally a service to submit the report on the server. 3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but again, it's not easy, nor documented. I have messed with running the server pages inside an IFRAME; not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around. So a reasonable number of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.
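
    As an illustration of the web service route, the call is an ordinary SOAP invocation against the Publisher server. Only the runReport operation name comes from the text above; the WSDL location and the request field names in this PHP sketch are assumptions that should be checked against your own server's WSDL:

        <?php
        // Hypothetical runReport call against a BI Publisher server.
        // The WSDL path and the reportRequest fields below are assumptions;
        // inspect the actual PublicReportService WSDL before relying on them.
        $client = new SoapClient('http://bipserver:9704/xmlpserver/services/PublicReportService?wsdl');

        $request = array(
            'reportRequest' => array(
                'reportAbsolutePath' => '/HR/Salary Report/Salary Report.xdo',   // example catalog path
                'attributeFormat'    => 'pdf',
            ),
            'userID'   => 'bip_user',      // placeholder credentials
            'password' => 'bip_password',
        );

        $response = $client->__soapCall('runReport', array($request));

        // reportBytes is assumed to hold the binary output of the report.
        file_put_contents('report.pdf', $response->runReportReturn->reportBytes);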

    Read the article
