Search Results

Search found 10515 results on 421 pages for 'automatically'.


  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem.

    Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated.

    The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm.

    I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed.

    How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz.

    This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and reacts accordingly when things go awry.

    Read the article

  • Shrinking a Linux OEL 6 virtual Box image (vdi) hosted on Windows 7

    - by AndyBaker
    Recently, for a customer demonstration, there was a requirement to build a VirtualBox image with Oracle Enterprise Manager Cloud Control 12c. This meant installing OEL Linux 6 as well as creating an 11gR2 database and Oracle Enterprise Manager Cloud Control 12c on a single virtual box. Storage was sized at 300Gb using dynamically allocated storage for the virtual box, and about 10Gb was used for Linux and the initial build. After copying over all the binaries and performing all the installations, the virtual box had grown to around 80Gb of used size on the host operating system, although internally it only really needed around 20Gb. This meant 60Gb had been consumed while copying over all the binaries and, although now free, was not returned to the host operating system due to the growth of the virtual box storage '.vdi' file. Once the 'vdi' storage has grown, it is not shrunk automatically afterwards. Space is always tight on the laptop, so it was desirable to shrink the virtual box back to a minimal size, and here is the process that was followed.

    Install the 'zerofree' Linux package into the OEL6 virtual box
    The RPM was downloaded and installed from a site similar to the one below:
    http://rpm.pbone.net/index.php3/stat/4/idpl/12548724/com/zerofree-1.0.1-5.el5.i386.rpm.html
    A simple internet search for 'zerofree Linux rpm' was enough to find the required rpm.

    Execute the 'zerofree' package on the desired Linux file system
    To execute this package the desired file system needs to be mounted read-only. The following steps outline this process.
    As root: # umount /u01
    As root: # mount -o ro -t ext4 /u01
    NOTE: The -o is options and the -t is the file system type found in /etc/fstab.
    Next run zerofree against the required storage; this is located by a simple 'df -h' command to see the device associated with the mount.
    As root: # zerofree -v /dev/sda11
    NOTE: This takes a while to run, but the '-v' option gives feedback on the progress.
    What does zerofree do? Zerofree's purpose is to go through the file system and zero out any unused sectors on the volume, so that the later stages can shrink the virtual box storage and reclaim the free space. When zerofree has completed, the virtual box can be shut down, as the last stage is performed on the physical host where the virtual box vdi files are located.

    Compact the virtual box '.vdi' files
    The final stage is to get VirtualBox to shrink back the storage that has been correctly flagged as free space after executing zerofree. On the physical host, in this case a Windows 7 laptop, a DOS window was opened. At the prompt the first step is to put the VirtualBox binaries onto the PATH.
    C:\> echo %PATH%
    The above shows the current value of the PATH environment variable.
    C:\> set PATH=%PATH%;c:\program files\Oracle\Virtual Box;
    The above adds the VirtualBox binary location onto the existing path.
    C:\> cd c:\Users\xxxx\OEL6.1
    The above changes directory to where the VDI files are located for the required virtual box machine.
    C:\Users\xxxxx\OEL6.1> VBoxManage.exe modifyhd zzzzzz.vdi compact
    NOTE: The zzzzzz.vdi is the name of the required vdi file to shrink.
    Finally the above command is executed to perform the compact operation on the '.vdi' file(s). This also takes a long time to complete, but shrinks the VDI file back to a minimum size.
    In the case of the demonstration OEM 12c virtual box, this reduced the image from 80Gb to 20Gb, which was a great outcome to achieve.
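    For reference, here is the guest-side part of the process gathered into one sketch. It assumes the same example mount point (/u01) and device (/dev/sda11) used above, and that the zerofree RPM is already installed; check your own 'df -h' output and /etc/fstab before running anything like this, and run it as root.
    #!/bin/bash
    # Sketch only: zero the unused blocks on one ext4 filesystem inside the guest,
    # so the host-side compact step can reclaim the space afterwards.
    set -e
    FS_MOUNT=/u01          # example mount point from the article
    FS_DEVICE=/dev/sda11   # example device from the article; confirm with 'df -h'

    umount "$FS_MOUNT"
    mount -o ro -t ext4 "$FS_DEVICE" "$FS_MOUNT"   # zerofree wants the filesystem read-only
    zerofree -v "$FS_DEVICE"                       # slow; -v reports progress
    # Shut the guest down afterwards, then run the compact step on the host:
    #   VBoxManage.exe modifyhd <name>.vdi compact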

    Read the article

  • Auto-organized / smart inventory system?

    - by VeXe
    For the past week I've been working on an inventory system with Unity3D. At first I got help from the guys at Design3, but it wasn't too long till we split paths, because I really didn't like the way they did their code; it didn't have any smell of OOP whatsoever. I took it further steps ahead - items take more than one slot, advanced placement system (items try their best to find the best close fit), local mouse system (mouse gets trapped in active bag area), etc. Here's a demo of my work.

    What we would like to have in our game is an auto-organizing feature - not auto-sort. We want this feature because our inventory is going to be in 'real-time' - not like in Resident Evil 1, 2, 3 etc. where you would pause the game and do things in your inventory. Now imagine yourself in a sticky situation surrounded by zombies, and you don't have bullets. You look around, you see that there are bullets nearby on the ground, so you go for them and try to pick them up, but they don't fit! You look at your inventory and find out that if you reorganize some of the items, it will fit! Now the player in that situation doesn't have time to reorganize, because he's surrounded by zombies and will die if he stops and organizes the inventory to make space (remember, the inventory is real-time, no pausing). Wouldn't it be nice for that to happen automatically? Yes! (I believe this has been implemented in some games like Dungeon Siege or something, so sure it's doable.)

    Take a look at this picture for example: Yes, if you auto-sort you will get your spaces, but it's bad because:
    1- Expensive: it doesn't need a whole sort operation to free those spaces; in the first picture, just slide the red item at the bottom to the very left, and you get the same spaces that you got from the auto-sort.
    2- It's annoying to the player: "Who the F told you to re-order my stuff?"

    I'm not asking for "how to write the code" for this, I'm just asking for some guidance: where to look, what algorithms are involved? Is this something related to graphs and shortest-path stuff? I hope not, cuz I didn't manage to continue my college studies :/ But even if it is, just tell me and I will learn the stuff related. Notice there could be more than just one solution. So I guess the first thing I have to do is figure out if the situation is 'solvable' - if I know how to determine if a situation is solvable or not, then I can 'solve' it. I just need to know the conditions that make it 'solvable'. And I believe there must be some algorithm/data structure for this.

    Here's a pic for more than one solution of trying to fit a 1x3 item: The arrows show just one of the solutions, but if you look you will find more than one. This is what I ultimately want: not auto-sorting, but finding a solution and applying it. Note that if I spend time on it I will come up with a way to solve it, but it wouldn't be the best way; it's like holding a car wheel with your feet instead of your hands! XD Or just like trying to solve an issue that requires arrays, but you're not yet aware of their existence! So what is the right approach to this? Hope somebody helps, thanks a lot in advance :)

    Read the article

  • Using Private Extension Galleries in Visual Studio 2012

    - by Jakob Ehn
    Note: The installer and the complete source code is available over at CodePlex at the following location: http://inmetavsgallery.codeplex.com

    Extensions and addins are everywhere in the Visual Studio ALM ecosystem! Microsoft releases new cool features in the form of extensions and the list of 3rd party extensions that plug into Visual Studio just keeps growing. One of the nice things about the VSIX extensions is how they are deployed. Microsoft hosts a public Visual Studio Gallery where you can upload extensions and make them available to the rest of the community. Visual Studio checks for updates to the installed extensions when you start Visual Studio, and installing/updating the extensions is fast since it is only a matter of extracting the files within the VSIX package to the local extension folder.

    But for custom, enterprise-specific extensions, you don't want to publish them online to the whole world, but you still want an easy way to distribute them to your developers and partners. This is where Private Extension Galleries come into play. In Visual Studio 2012, it is now possible to add custom extension galleries that can point to any URL, as long as that URL returns the expected content of course (see below). Registering a new gallery in Visual Studio is easy, but there is very little documentation on how to actually host the gallery. Visual Studio galleries use Atom Feed XML as the protocol for delivering new and updated versions of the extensions. This MSDN page describes how to create a static XML file that returns the information about your extensions. This approach works, but requires manual updates of that file every time you want to deploy an update of the extension.

    Wouldn't it be nice to have a web service that takes care of this for you, that just lets you drop a new version of your VSIX file and have it automatically detect the new version and produce the correct Atom Feed XML? Well, search no more, this is exactly what the Inmeta Visual Studio Gallery Service does for you :-) Here you can see that in addition to the standard Online galleries there is an Inmeta Gallery that contains two extensions (our WIX templates and our custom TFS Checkin Policies). These can be installed/updated in the same way as extensions from the public Visual Studio Gallery.

    Installing the Service
    Download the installer (Inmeta.VSGalleryService.Install.msi) for the service and run it. The installation is straightforward, just select the web site, application pool and (optionally) a virtual directory where you want to install the service.
    Note: If you want to run it in the web site root, just leave the application name blank.
    Press Next and finish the installer. Open web.config in a text editor and locate the <applicationSettings> element. Edit the following setting values:
    FeedTitle - This is the name that is shown if you browse to the service using a browser. Not used by Visual Studio.
    BaseURI - When Visual Studio downloads the extension, it will be given this URI + the name of the extension that you selected. This value should be on the following format: http://SERVER/[VDIR]/gallery/extension/
    VSIXAbsolutePath - This is the path where you will deploy your extensions. This can be a local folder or a remote share. You just need to make sure that the application pool identity account has read permissions in this folder.
    Save web.config to finish the installation. Open a browser and enter the URL to the service. It should show an empty Feed page.

    Adding the Private Gallery in Visual Studio 2012
    Now you need to add the gallery in Visual Studio. This is very easy and is done as follows: Go to Tools -> Options and select Environment -> Extensions and Updates. Press Add to add a new gallery. Enter a descriptive name, and add the URL that points to the web site/virtual directory where you installed the service in the previous step. Press OK to save the settings.

    Deploying an Extension
    This one is easy: Just drop the file in the designated folder! :-)  If it is a new version of an existing extension, the developers will be notified in the same way as for extensions from the public Visual Studio gallery.

    I hope that you will find this service useful; please contact me if you have questions or suggestions for improvements!

    Read the article

  • Software Center seems to freeze system when installing, syslog has "blocked for more than 120 seconds" errors

    - by nbm
    12.04 (precise) 64-bit Kernel Linux 3.2.0-39 3.6GB memory Intel Core 2 Duo CPU @ 2.40GHz x2 WUBI-installed Ubuntu running on a MacBook Pro 7.1 with OSX running Vista via Boot Camp (hey, I like lots of OS's m'kay?) When installing from Ubuntu software center my system very frequently freezes. This has happened 4 of the last 5 installs. Most recently I was installing the Google Earth .deb from Google's website: clicking the .deb file automatically opens Software Center (otherwise I would have used Synaptic, as I've grown to expect Software Center to freeze my system and I'm rather tired of it.) By "freeze" I mean nothing works: no dash, no launcher, no mouse movement, no alt-tab, can't open terminal (keyboard does not work). Software center does show the "installing" icon but after that it greys out and I can't click anything. REISUB has no effect but a cold power-down and restart is possible. Occasionally, after 5-10 minutes, I'll be able to move the mouse / use the keyboard and run a launcher command or two, although other open apps (Chrome and Software Center) will still be greyed-out/frozen. (I've never waited longer than that - if still unresponsive after 15 minutes I just power down and restart.) Most recently, which is why I am finally posting a question, I waited about 15 minutes and was finally able to open System Monitor while this was going on. Processes tells me that System Monitor is using about 20% of CPU, and nothing else is using much (zeros mostly). In fact I didn't even see Software Center listed? However at this point the system finally partially unfroze, the installation completed, and while I wasn't about to close Software Center I was able to do a system shutdown and fresh restart and I went and took a look at the syslog. In /var/log/syslog I see a lot of ":blocked for more than 120 seconds" messages. Similar to ubuntu hang out with this message :blocked for more than 120 seconds Which has not been answered, and I'm not running a virtual machine. My full syslog with stack traces looks very, very similar to this: Why do tasks on Amazon Xen instance block for over 120 seconds causing server to hang? Note that that question was solved, but that's because the problem was being caused by Amazon and Amazon fixed the bug. I'm not running anything Amazon-related. My syslog does look very similar, however. My question is also similar to this: Troubleshooting server hang But the referenced "duplicate" in that question is about how to kill processes/restart when the system freezes. I know how to kill processes and restart. I want to figure out what is causing the problem so I can try to fix it. I realize that I could just use Synaptic instead of Ubuntu Software Center, but I'd like to try to solve the problem if possible. I'm thinking I should perhaps submit a bug report, but I wanted to first see if anyone else was having any similar problems, and if so what you all did to fix it. I see a number of questions about Software Center freezing and others, including those I linked, about the "blocked for more than 120 seconds" log error, but I didn't see any question that links the two. I did save a copy of the syslog report if anyone wants to see it, but as mentioned it's quite similar to the one posted in the Amazon-related question...and I didn't want to take up even more space unnecessarily as, my apologies - this question has already become extremely verbose!
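    For what it's worth, my understanding is that the "blocked for more than 120 seconds" messages come from the kernel's hung-task watchdog and usually mean a process sat waiting on disk I/O for a long time. So one thing I plan to try is logging writeback and I/O wait while Software Center installs something, to see whether the machine is simply I/O-bound at the moment of the freeze. A rough sketch of what I mean (standard tools; the log path is arbitrary):
    #!/bin/bash
    # Rough diagnostic sketch: sample writeback pages and I/O wait every 5 seconds.
    # Start it in a terminal, kick off an install, and read the log after a reboot.
    LOG=/tmp/install-io.log
    while sleep 5; do
        {
            date
            grep -E 'Dirty|Writeback' /proc/meminfo   # pages still waiting to be written to disk
            vmstat 1 2 | tail -1                      # the 'wa' column is CPU time stuck on I/O
        } >> "$LOG"
    done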

    Read the article

  • Understanding the Value of SOA

    - by Mala Narasimharajan
    Written by: Debra Lilley, ACE Director, Fusion Applications

    Again I want to talk from my area of expertise, Fusion Applications, and about their design fundamentals. If you look at the table below and start at the bottom, Oracle have defined all of the business objects (e.g. accounts, people, customers, invoices, etc.) used by Fusion Applications; each of these objects contains all of the information required and can be expanded if necessary. Oracle have then created, for each of these business objects, every action that is needed for the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet, http://www.dnb.co.uk/), creating the record, notifying those required, etc. Each of these actions is a stand-alone web service. Again, you can create new actions or subscribe to an externally provided web service, e.g. the D&B check. The diagram also shows that all development of Fusion Applications is done with their Fusion Middleware offerings. The Intelligent Business Process is then the order in which you run these actions; this is Service Oriented Architecture, SOA.

    Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications such as EBS, PeopleSoft, JDE and Siebel. The other applications are written with proprietary development tools, so how do they work with SOA? It's a very simple answer: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won't go into the technicalities of how that is done, but what's known as a wrapper was added to allow each of them to act in this way. Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the 'special' sauce that makes them so good, the User Experience - but that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle's own UX blog https://blogs.oracle.com/usableapps/

    The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is on the AppAdvantage team, provided myself and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was that of the EBS inventory open interface. The simple concept is that request records can be added to a table and an import run that creates these as transactions in inventory. What SOA allows you to do is add to the table from any source and then run this process automatically, whereas traditionally you had to run the process at regular intervals because you didn't know if the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval had to be very small and the process was potentially run many times unnecessarily. Using SOA it only happens when necessary, without any delay.

    So in my post today I've talked about how SOA is used with Fusion Applications and in the linking with more traditional applications, but that is only the tip of the iceberg of its potential: your applications are just part of your IT systems, and SOA can orchestrate your data across all of them; the beauty of open standards.
    Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.

    Read the article

  • Best way to load application settings

    - by enzom83
    A simple way to keep the settings of a Java application is a text file with a ".properties" extension, containing the identifier of each setting associated with a specific value (this value may be a number, string, date, etc.). C# uses a similar approach, but the text file must be named "App.config". In both cases, in source code you must initialize a specific class for reading settings: this class has a method that returns the value (as a string) associated with the specified setting identifier.

    // Java example
    Properties config = new Properties();
    config.load(...);
    String valueStr = config.getProperty("listening-port");
    // ...

    // C# example
    NameValueCollection setting = ConfigurationManager.AppSettings;
    string valueStr = setting["listening-port"];
    // ...

    In both cases we should parse the strings loaded from the configuration file and assign the converted values to the related typed objects (parsing errors could occur during this phase). After the parsing step, we must check that the setting values belong to a specific domain of validity: for example, the maximum size of a queue should be a positive value, some values may be related (example: min < max), and so on.

    Suppose that the application should load the settings as soon as it starts: in other words, the first operation performed by the application is to load the settings. Any invalid values for the settings must be replaced automatically with default values: if this happens to a group of related settings, those settings are all set with default values. The easiest way to perform these operations is to create a method that first parses all the settings, then checks the loaded values and finally sets any default values. However, maintenance is difficult if you use this approach: as the number of settings increases while developing the application, it becomes increasingly difficult to update the code. In order to solve this problem, I had thought of using the Template Method pattern, as follows.

    public abstract class Setting
    {
        protected abstract bool TryParseValues();
        protected abstract bool CheckValues();
        public abstract void SetDefaultValues();

        /// <summary>
        /// Template Method
        /// </summary>
        public bool TrySetValuesOrDefault()
        {
            if (!TryParseValues() || !CheckValues())
            {
                // parsing error or domain error
                SetDefaultValues();
                return false;
            }
            return true;
        }
    }

    public class RangeSetting : Setting
    {
        private string minStr, maxStr;
        private byte min, max;

        public RangeSetting(string minStr, string maxStr)
        {
            this.minStr = minStr;
            this.maxStr = maxStr;
        }

        protected override bool TryParseValues()
        {
            return (byte.TryParse(minStr, out min) && byte.TryParse(maxStr, out max));
        }

        protected override bool CheckValues()
        {
            return (0 < min && min < max);
        }

        public override void SetDefaultValues()
        {
            min = 5;
            max = 10;
        }
    }

    The problem is that in this way we need to create a new class for each setting, even for a single value. Are there other solutions to this kind of problem? In summary:
    Easy maintenance: for example, the addition of one or more parameters.
    Extensibility: a first version of the application could read a single configuration file, but later versions may give the possibility of a multi-user setup (admin sets up a basic configuration, users can set only certain settings, etc.).
    Object oriented design.

    Read the article

  • Oracle Support Training (November 2012)

    - by Steve He
      Oracle Support Training

    Oracle Support offers free web-based training sessions to help customers work more effectively with Oracle support services and tools. The live webcasts scheduled for November are listed below, followed by the recorded courses that are available on demand.

    Live webcasts (November 2012):
    Support Best Practices (formerly WEWS) - November 13, 15:00
    EBS - Support Diagnostics Tools - November 15, 15:00
    OSWatcher Black Box: How to improve performance and monitor your system automatically - November 15, 15:00
    MOS - Configuration Manager - November 20, 15:00
    Get Proactive Resolve - Answers Generic - November 22, 15:00
    MOS - Communities - November 27, 15:00

    The webcasts are delivered through My Oracle Support web conferencing; check the listed times against a world clock for your own time zone. For details on joining an Oracle web conference, see note 603505.1. The sessions are presented in Chinese (Mandarin), and Internet Explorer is the recommended browser for accessing My Oracle Support.

    Recorded courses:
    Creating Customer Value
    Oracle Support Basics
    An Introduction to My Oracle Support
    Service Request Management
    Customer User Administration
    Managing Favorite
    Quick Search
    Hot Topic Email
    Patch and Update
    Site Alert
    Search and Browse Features in My Oracle Support
    Why Use Configuration Manager In The My Oracle Support
    Enterprise Manager 11g and My Oracle Support
    Oracle Collaborative Support
    How to Escalate a Service Request within Oracle Support

    For more information, visit the Support Training Community.

    Read the article

  • Internet Explorer keeps asking for NTLM credentials in Intranet zone

    - by Tomalak
    Long text, sorry for that. I'm trying to be as specific as possible. I'm on Windows 7 and I experience a very frustrating Internet Explorer 8 behavior. I'm in a company LAN with some intranet servers and a proxy for connecting with the outside world. On sites that are clearly recognized as being "Local Intranet" (as indicated in the IE status bar) I keep getting "Windows Security" dialog boxes that ask me to log in. These pages are served off an IIS6 with "Integrated Windows Security" enabled, NTFS permits Everyone:Read on the files themselves. If I enter my Windows credentials, the page loads fine. However, the dialog boxes will be popping up the next time, regardless if I ticked "Remember my credentials" or not. (Credentials are stored in the "Credential Manager" but that does not make any difference as to how often these login boxes appear.) If I click "Cancel", one of two things can happen: Either the page loads with certain resources missing (images, styleheets, etc), or it does not load at all and I get HTTP 401.2 (Unauthorized: Logon Failed Due to Server Configuration). This depends on whether the logon box was triggered by the page itself, or a referenced resource. The behavior appears to be completely erratic, sometimes the pages load smoothly, sometimes one resource triggers a logon message, sometimes it does not. Even simply re-loading the page can result in changed behavior. I'm using WPAD as my proxy detection mechanism. All Intranet hosts do bypass the proxy in the PAC file. I've checked every IE setting I can think of, entered host patterns, individual host names, IP ranges in every thinkable configuration to the "Local Intranet" zone, ticked "Include all sites that bypass the proxy server", you name it. It boils down to "sometimes it just does not work", and slowly I'm losing my mind. ;-) I'm aware that this is related to IE not automatically passing my NTLM credentials to the webserver but asking me instead. Usually this should only happen for NTLM-secured sites that are not recognized as being in the "Intranet" zone. As explained, this is not the case here. Especially since half of a page can load perfectly and without interruption and some page's resources (coming from the same server!) trigger the login message. I've looked at http://support.microsoft.com/kb/303650, which gives the impression of describing the problem, but nothing there seems to work. And frankly, I'm not certain if "manually editing the registry" is the right solution for this kind of problem. I'm not the only person in the world with an IE/intranet/IIS configuration, after all. I'm at a loss, can somebody give me a hint?

    Read the article

  • Indesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign Pages that I want to use as templates. I've created an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages automatically generated. I've been reading online and working with InDesign's "Import XML" features without any luck. The documentation has been pretty poor for me. And Google searches haven't returned much that's fruitful.

    Edit: I'm updating this to now include my present steps
    1) I create a Master Page of my template
    2) I add a bunch of text frames where I want the imported data from the XML file to be placed
    3) I open the "Tags" window and Import an XML file
    4) I mark my text frames in the Master Document with the appropriate tags
    5) I then add a lot of pages (like 200) to the document
    6) Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail.
    So there's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Anyone have any good tips for mail-merge like functionality with an XML document and auto-generation of InDesign pages? BTW, here's an example of Adobe's great documentation for merging repeated XML elements. There's gotta be more... InDesign CS4 Docs: XML-Importing XML-Working with Repeating Data

    EDIT: Here's some of the sample XML, notice the ITEM will repeat. I've also truncated the data in the "desc" tag:
    <output>
      <item>
        <user_name>taude</user_name>
        <date>2009-02-21</date>
        <title>Wishful Thinking</title>
        <desc>Skiing up in Vermont on a beautiful day. This photo of</desc>
        <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail>
      </item>
      <item>
        <user_name>taude</user_name>
        <date>2009-02-22</date>
        <title>Skiing Self Portrait</title>
        <desc>I was inspired by ML's self-portrait while </desc>
        <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail>
      </item>
    </output>
    Here's what my imported XML looks like with the InDesign Structure

    Read the article

  • Creating a dynamic lacp trunk from HP Procurve 2412zl to Proliant DL380 G7

    - by Maalobs
    I'm configuring an IEEE 802.3ad (LACP) dynamic trunk from a HP Procurve 2412zl (firmware version K.15.07) switch to a HP Proliant DL380 G7 server. The DL380 has 4 NICs and is running Win2008 R2, so I'm teaming the NICs together and leaving everything on the recommended "automatic" setting in the HP NIC configuration tool. The server is one of two, they'll be connected on interfaces F17-F20 and F21-F24 respectively on the switch. I need the servers in a separate VLAN, here is the configuration for the VLAN: vlan 10 name "Lab_Mgmt" untagged B2,F17-F24 ip address 172.22.71.3 255.255.255.0 tagged B21 exit There is a DHCP-relay into the VLAN 10 from another device beyond interface B21. The Advanced Traffic Management Guide says that in order to run a dynamic LACP trunk on another VLAN besides the DEFAULT_VLAN, you need to first enable GVRP and then use "forbid" to stop the interfaces from automatically joining DEFAULT_VLAN when the dynamic trunk is created. GVRP brings some other stuff with it that I don't want or need, so I disable it with "unknown-vlans disable" on all other interfaces. Here is how I do it: procurve-5412zl-1(config)# gvrp procurve-5412zl-1(config)# interface A1-A24,B1-B24,C1-C24,D1-D10,D13-D24,E1-E24, F1-F16,K1,K2 unknown-vlans disable procurve-5412zl-1(config)# vlan 1 forbid F17-F24 procurve-5412zl-1(config)# interface F17-F20 lacp active The result afterwards looks all successful: procurve-5412zl-1(config)# show trunks Load Balancing Method: L3-based (Default), L2-based if non-IP traffic Port | Name Type | Group Type ---- + -------------------------------- --------- + ------ -------- F17 | XYZTEAM3_NIC1 100/1000T | Dyn2 LACP F18 | XYZTEAM3_NIC2 100/1000T | Dyn2 LACP F19 | XYZTEAM3_NIC3 100/1000T | Dyn2 LACP F20 | XYZTEAM3_NIC4 100/1000T | Dyn2 LACP procurve-5412zl-1(config)# vlan 10 procurve-5412zl-1(vlan-10)# show lacp LACP LACP Trunk Port LACP Admin Oper Port Enabled Group Status Partner Status Key Key ---- ------- ------- ------- ------- ------- ------ ------ F17 Active Dyn2 Up Yes Success 0 0 F18 Active Dyn2 Up Yes Success 0 0 F19 Active Dyn2 Up Yes Success 0 0 F20 Active Dyn2 Up Yes Success 0 0 On the Proliant server, the NIC configuration Tool is also indicating that a 802.3ad dynamic trunk has been established. Everything should be good, but it isn't. The server is not getting an IP-address from the DHCP, which it does if I'm not enabling LACP. If I configure the server to a static IP-address on the VLAN 10 subnet, it can't even ping the switch IP-address, much less anything outside of the VLAN. The switch can't ping the server either. I did another attempt with F17-F20 tagged, and checking the box "Default Native Tag (VLAN 10)" in the NIC configuration tool on the server, but there was no difference. Does anyone have any idea what I might have missed?

    Read the article

  • Is there a way to route all traffic from Android through a proxy/tunnel to my Tomato router?

    - by endolith
    I'd like to be able to connect my Android phone to public Wi-Fi points with unencrypted connections, but People can see what I'm doing by intercepting my radio transmissions People who own the access point can see what I'm doing. There are tools like WeFi and probably others to automatically connect to access points, but I don't trust random APs. I'd like all my traffic to go through an encrypted tunnel to my home router, and from there out to the Internet. I've done such tunnels from other computers with SSH/SOCKS and PPTP before. Is there any way to do this with Android? I've asked the same question on Force Close, so I'll change this question to be about both sides of the tunnel. More specifically: My phone now has CyanogenMod 4.2.3 My router currently has Tomato Version 1.25 I'm willing to change the router firmware, but I was having issues with DD-WRT disconnecting, which is why I'm using Tomato. Some possible solutions: SSH with dynamic SOCKS proxy: Android supposedly supports this through ConnectBot, but I don't know how to get it to route all traffic. Tomato supports this natively. I've been using this with MyEntunnel for my web browsing at work. Requires setting up each app to go through the proxy, though. PPTP: Android supports this natively. Tomato does not support this, unless you get the jyavenard mod and compile it? I previously used PPTP for web browsing at work and in China because it's native in Windows and DD-WRT. After a while I started having problems with it, then I started having problems with DD-WRT, so I switched to the SSH tunnel instead. Also it supposedly has security flaws, but I don't understand how big of a problem it is. IPSec L2TP: Android (phone) and Windows (work/China) both support this natively I don't know of a router that does. I could run it on my computer using openswan, but then there are two points of failure. OpenVPN: CyanogenMod apparently includes this, and now has an entry to create a new OpenVPN in the normal VPN interface, but I have no idea how to configure it. TunnelDroid apparently handles some of this. Future versions will have native support in the VPN settings? Tomato does not support this, but there are mods that do? I don't know how to configure this, either. TomatoVPN roadkill mod SgtPepperKSU mod Thor mod I could also run a VPN server on my desktop, I guess, though that's less reliable and presumably slower than running it in the router itself. I could change the router firmware, but I'm wary of more fundamental things breaking. Tomato has been problem-free for the regular stuff. Related: Anyone set up a SSH tunnel to their (rooted) G1 for browsing?
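    For comparison, this is roughly what the SSH/SOCKS option looks like from a desktop, i.e. what MyEntunnel is doing for me today; the hostname and local port below are placeholders, and the router's SSH daemon may listen on a non-standard port:
    # Open a dynamic SOCKS proxy on local port 1080 through the router's SSH daemon.
    ssh -N -D 1080 root@router.example.org
    # Each application then has to be pointed at SOCKS5 localhost:1080, which is
    # exactly the per-app configuration step I'd like to avoid on the phone.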

    Read the article

  • Windows 7 is shutting down unexpectedly, according to the logs.

    - by dlamblin
    Here's a message from my eventvwr EventLog (Windows Logs > System):
    The previous system shutdown at 11:51:15 AM on 7/29/2009 was unexpected.
    This is funny because I was wondering why the system shut down while I was playing Civilization IV full screen. Now I know. It was unexpected. Has anyone encountered and resolved this?

    A little background: I am running Windows 7 RC inside VMWare Fusion 2 (just updated a few months back) on a MacBook (Bitterly not Pro) aluminum-body. Windows 7 occasionally will shut down. This isn't a quick turn-off, it's a shutdown where all the programs are exited, the system waits until they quit (and Civ4 doesn't prompt me to save), it even installed Windows Updates before restarting. And yes, it is restarting right after the shutdown. Because I run a game in full screen mode I do not notice any dialog with a countdown timer or anything like that that might be a warning.

    As I have iStat on my dashboard widgets I can see about 8 temperature monitors. I have seen the CPU get up to 74C before, but during the shutdown, though it seemed hot to the touch (always is), it read 61C for the CPU, 60C for heatsink A, 50C for heatsink B and in the 30s-40s for the enclosure and hard drives. As I type this now, the temps are actually higher, so I don't think the temperature caused it. I have at least six such events dating first from 5/17, which was a week after installing Windows 7.

    I did find one information-level warning from USER32 in the system log that says:
    The process C:\Windows\system32\svchost.exe (DLAMBLIN-WIN7) has initiated the restart of computer DLAMBLIN-WIN7 on behalf of user NT AUTHORITY\SYSTEM for the following reason: Operating System: Recovery (Planned)
    Reason Code: 0x80020002
    Shutdown Type: restart
    Comment:
    And another 15 minutes before that from Windows Update:
    Restart Required: To complete the installation of the following updates, the computer will be restarted within 15 minutes: - Cumulative Security Update for Internet Explorer 8 for Windows 7 Release Candidate for x64-based Systems (KB972260)
    Which I think kind of explains it. Though I don't know why restarting after an update would create an error event of "shutdown was unexpected", isn't that pretty odd? Now, how do I set it to never restart after an update unless I click something?

    Application of solution: As fretje reminded me, there's a couple of configurable settings for this; in Windows 7 they're in much the same place as in Windows 2000 SP3 and XP SP1. Running gpedit.msc pops up a window that looks like: Windows 7 has changed the order and added a couple of newer options I've italicized:
    Do not display 'Install Updates and Shut Down' in Shut Down Windows dialog box
    Do not adjust default option to 'Install Updates and Shut Down' in Shut Down Windows dialog box
    Enabling Windows Power Management to automatically wake up the system to install scheduled updates
    Configure Automatic Updates
    Specify intranet Microsoft update service location
    Automatic Updates detection frequency
    Allow non-administrators to receive update notifications
    Turn on Software Notifications
    Allow Automatic Updates immediate installation
    Turn on recommended updates via Automatic Updates
    No auto-restart with logged-on users for scheduled Automatic Updates
    Re-prompt for restart with scheduled installations
    Delay Restart for scheduled installations
    Reschedule Automatic Updates schedule

    Read the article

  • Prevent Windows 7 User Accounts from accessing files in other User Accounts

    - by Mantis
    I'm trying to set up another User Account on my Windows 7 Professional laptop for use by another person. I do not want that person to have access to any of the files in my User Account on the same machine. This machine has a single hard disk formatted with NTFS. User accounts data is stored in the default location, C:\Users. I use the computer with a Standard Account (not an Administrator). Let's call my user account "User A." I have given the new user a Standard Account. Let's call the new user's account "User B." To be clear, I want User B to have the ability to log in to her account, to use the computer, but to be unable to access any of the files in the User A account on the same machine. Currently, User B cannot use Windows Explorer to navigate to the location C:\Users\User A. However, by simply using Windows Search, User B can easily find and open documents saved in C:\Users\User A\Documents. After opening a document, that document's full path appears in "Recent Places" in Windows Explorer, and the document appears as a file that can be opened using the "Recent" feature in Word 2010. This is not the desired behavior. User B should not have the ability to see any documents using Windows Search or anything else. I have attempted to set permissions using the following procedure. Using an Administrator account, navigate to C:\Users and right-click on the "User A" folder. Select "Properties." In the "User A Properties" window that appears, click the "Security" tab. Click the "Edit..." button to change permissions. IN the "Permissions for User B" window that appears, under "Group or User Names," select User B. Under "Permissions for User B", check the box under the "Deny" column for the "Full Control" row. Ensure that the "Deny" box is automatically checked for all the other rows, and then click "OK." The system should then begin working. The process could take several minutes. When I followed this procedure, I received several "Access Denied" errors, suggesting that the system was unable to set the permissions as I had directed. I think this might be one of the reasons why User B is still able to access files in User A's account folders. Is there any other way I could accomplish my goal here? Thank you.

    Read the article

  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP-mapping across many DNS servers as well as replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations) this sounds like the two key features one would need. DNS services like Amazon's Route53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption is that each of these DNS services transparently offer me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seem to separate out these two functionalities, referring to the 2nd one (routing clients to closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director" and charge extra for the service. If a core tenant of an Anycast-capable system is to already do this, why is this functionality being earmarked as this extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia -- I understand what is being advertised, just not why it isn't implied already). I get extra-confused when a DNS service like Route53 that doesn't support this nebulous "GeoDNS" feature lists functionality like: Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs. ... which sounds exactly like what GeoDNS is intended to do, but geographically directing clients is something they explicitly don't support it yet. Ultimately I am looking for the two following features from a DNS provider: Map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. does) Utilize a DNS service that will respond to client requests for that domain with the IP address of the nearest server to the requestee. As mentioned, it seems like this is all part of an "Anycast" DNS service (all of which these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.
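    To make the first feature concrete: it is visible with a plain DNS query (sketch below; 8.8.8.8 is just an example resolver), whereas the second feature is entirely about which of those answers the DNS service chooses to hand to a particular client.
    # Ask a resolver which A records a name maps to; several answers back means
    # multiple IPs are mapped to one domain, but nothing here says which one is
    # closest to the client making the query.
    dig +short google.com A @8.8.8.8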

    Read the article

  • How to set up a mini call center?

    - by Ralph
    I'm trying to figure out how to set up a miniature call center for a small business. Like, for 1-4 people to take calls, but hopefully expandable to more. We want to accept nation-wide calls, and then I guess distribute the calls among the available agents. If no one is available, I guess it should either put them in a queue and play some annoying music for them, or forward to call to an agent who has least-recently taken a call who can then quickly answer and say "please hold" until they're done their call. We want to have one phone number that customers can call. I guess we then need some kind of ACD system which would take each call and forward it to an agent based on some algorithm? Then we would need to purchase a separate phone line for each agent, plus one just for the distributor? Or do we need several "extra" lines to maintain a queue (one for each customer waiting too)? This "ACD" thing, is it just a device that you would plug a phone line into (or several?), and maybe connect to a computer, aided by some software? Or is a subscription thing that I would need from my local telephone provider? Next, the business we're running, the callers will be repeat customers. It would be helpful to automatically pull up their profile based on the incoming number. The "software" our agents will be running will just be a website where they can log in (preferably from home) and then enter some information they would obtain through the call. So, the system would have to somehow interface with the website if possible. If not, we'll just have to ask each customer for an identifier (phone number, username, customer number, or something). Is this possible? I guess each computer would need a device that the call would pass through, and then if I can somehow hook into that, then I can write some software that will interface with the site. So, where do I start? What hardware do we need to buy? What subscriptions do we need? We were thinking this magicJack might help us in accepting long-distance calls for cheaper, but my understanding is that they provide you with some weird-looking number, is there a way we could "mask" it with our toll-free number? And then pass the incoming calls through the distributor system, which would then get passed to the call-accepting device which would both allow an agent to answer the call and have a software hook? (I realize this might be partially out of the scope of SU, but I wasn't sure where else to ask. It is about computer(-aided) hardware and software anyway.) P.S.: I don't need any of that "press 1 to talk to..." or "say xyz to..." junk. Just a straight-forward, connect-to-next-available-agent system.

    Read the article

  • Unable to mount XP share using fs-cifs from Linux

    - by MetalSearGolid
    I have a head unit that runs Linux that is connected to my PC via an Ethernet cable. I have a Windows XP share on this PC that the head unit needs to be able to mount, however, when mounting using the following command, it fails. Here is the command that fails, along with the verbose output: # fs-cifs -vvvvvvvvv -l //CUMBRIA-XP:192.168.1.2:/hnet /mnt/net cifs[2158679-1]: starting... cifs[2158679-1]: user is to input both name & passwd. cifs[2158679-1]: server [192.168.1.2] share [hnet] prefix [/mnt/net] user [nu ll] passwd [null] Welcome: 192.168.1.2(:/hnet) -> /mnt/net Username:headunit cifs[2158679-1]: user name: headunit length 8 cifs[2158679-1]: new server Password: cifs[2158679-1]: establishing connection to (192.168.1.2)CUMBRIA-XP cifs[2158679-1]: session request: 192.168.1.2:CUMBRIA-XP -> localhost cifs[2158679-1]: negotiating smb dialect cifs[2158679-1]: skey(idx=2): 00000000, challenge:(8), 6137bfa2 f2d7803b cifs[2158679-1]: negotiation: success with dialect=2 cifs[2158679-1]: logging headunit on 192.168.1.2 cifs[2158679-1]: new packet cifs[2158679-1]: returning: mid 0 status= 0 cifs[2158679-1]: smb_logon successful: dialect 2 enpass 1 cifs[2158679-1]: mounting 192.168.1.2:/hnet cifs[2158679-1]: returning: mid 1 status= 13 cifs[2158679-1]: smb_mount: Bad file descriptor cifs[2158679-1]: try upper case share. cifs[2158679-1]: session request: 192.168.1.2:CUMBRIA-XP -> localhost cifs[2158679-1]: negotiating smb dialect cifs[2158679-1]: skey(idx=2): 00000000, challenge:(8), 2d3e910f e3e148c4 cifs[2158679-1]: negotiation: success with dialect=2 cifs[2158679-1]: logging headunit on 192.168.1.2 cifs[2158679-1]: returning: mid 2 status= 0 cifs[2158679-1]: smb_logon successful: dialect 2 enpass 1 cifs[2158679-1]: mounting 192.168.1.2:/HNET cifs[2158679-1]: returning: mid 3 status= 13 cifs[2158679-1]: smb_mount: Bad file descriptor cifs[2158679-1]: mount failed. cifs[2158679-1]: io_mount: smb_connection failed: Bad file descriptor io_mount: Bad file descriptor cifs[2158679-1]: user is to input both name & passwd. fs-cifs: missing arguments, or all mount attempts failed. run "use fs-cifs" or "fs-cifs -h" for help. Any ideas? It is worthy to note that /mnt does not exist on the filesystem, but I was told by the company who gave us these units that fs-cifs should automatically create the /mnt/net folders if they don't exist.
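    One thing I'd like to rule out first, since /mnt doesn't exist on the filesystem: creating the mount point by hand before retrying the exact same command. This is only a guess; the fs-cifs invocation below is unchanged from the log above.
    # Guess: make sure the mount point exists, then retry the same mount command.
    mkdir -p /mnt/net
    fs-cifs -vvvvvvvvv -l //CUMBRIA-XP:192.168.1.2:/hnet /mnt/net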

    Read the article

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency but occasionally (maybe every minute or two) we get one or two huge latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Win2003 Server with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging. We have tried multiple things to try and solve this that have been gleaned from a few websites, e.g.: Making the first part of file names unique to aid 8.3 name generation Writing files to multiple directories Changing Intel Disk Write Caching Windows File/Printer Sharing Minimize memory used Balance Maximize data throughput for file sharing Maximize data throughput for network applications System-Advanced-Performance-Advanced NtfsDisableLastAccessUpdate - use fsutil behavior set disablelastaccess 1 disable 8.3 name generation - use "fsutil behavior set disable8dot3 1" + restart Enable a large size file system cache Disable paging of the kernel code IO Page Lock Limit Turn Off (or On) the Indexing Service But nothing seems to make much difference. There's a whole host of things we haven't tried yet but we wondered if anyone had come across the same problem, a reason and a solution (programmatic or not)? We can reproduce the problem using IOMeter and a simple setup: Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button. Select the Worker thread and put a cross in the box next to the disk you want to use in the Disk Targets tab and put '2000000' in Maximum Disk Size (NOTE: must have at least 1GB free space; sector size is 512 bytes) Next create a new access specification and add it to the worker thread: Transfer Request Size = 22kB 100% Sequential Percent of Access Spec = 100% Percent Read/Write = 100% Write Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds and both 'Number of Workers to Spawn Automatically' settings to zero. Select the Worker Thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine. Our machine isn't exactly slow (Xeon 8 core rack server with 4GB and onboard RAID). I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files) and we are going to try a few things around this principle. But it is also related to disk performance since we don't see it on a RAMDisk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard Right-click and select 'Open Link in New Tab': ProcMon Result

    Read the article

  • Ubuntu - Ruby Daemon script creates two processes - sh and ruby - PID file points at sh, not ruby

    - by Jonathan Scoles
    The PID file for a ruby process I have running as a daemon is getting the wrong PID. It appears that running /etc/init.d/sinatra start creates two processes - sh and ruby, and the PID that ends up in the PID file is that of the sh process. This means that when I then run /etc/init.d/sinatra stop or /etc/init.d/sinatra restart, it is killing sh and leaving the ruby process still running. I'd like to know a) why is my script launching two processes - sh and ruby, and not just ruby, and b) how do I fix it to just launch ruby? Details of the setup: I have a small Sinatra server set up on an ubuntu server, running as a daemon. It is set to automatically at server startup run a script named sinatra in /etc/init.d that launches the a control script control.rb, which then runs a ruby daemon command to start the server. The script is run under the 'sinatrauser' account, which has permissions for the directories the script needs. contents of /etc/init.d/sinatra #!/bin/bash # sinatra Startup script for Sinatra server. sudo -u sinatrauser ruby /var/www/sinatra/control.rb $1 RETVAL=$? exit $RETVAL To install this script, I simply copied it to /etc/init.d/ and ran sudo update-rc.d sinatra defaults contents of /var/www/sinatra/control.rb require 'rubygems' require 'daemons' pwd = Dir.pwd Daemons.run_proc('sinatraserver.rb', {:dir_mode => :normal, :dir => "/opt/pids/sinatra"}) do Dir.chdir(pwd) exec 'ruby /var/www/sinatra/sintraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1' end portion of output from ps -A 6967 ? 00:00:00 apache2 10181 ? 00:00:00 sh <--- PID file gets this PID 10182 ? 00:00:02 ruby <--- Actual ruby process running Sinatra 12172 ? 00:00:00 sshd The PID file gets created in /opt/pids/sinatra/sinatraserver.rb.pid, and always contains the PID of the sh instance, which is always one less than the PID of the ruby process EDIT: I tried micke's solution, but it had no effect on the behavior I am seeing. This is the output from ps -A f. This output looks the same whether I use sudo -u sinatrauser ... or su sinatrauser -c ... in the service start script in /etc/init.d. 1146 ? S 0:00 sh -c ruby /var/www/sinatra/sinatraserver.rb >> /var/log/sinatra/sinatraOutput.log 2>&1 1147 ? S 0:00 \_ ruby /var/www/sinatra/sinatraserver.rb

    Read the article

  • InDesign Import XML into Automatic Page generation, data merge

    - by taudep
    I've created some InDesign pages that I want to use as templates, and an XML file with all the appropriate data. I want to merge the XML data with the InDesign page and have a few hundred pages generated automatically. I've been reading online and working with InDesign's "Import XML" features without any luck; the documentation has been pretty poor for me, and Google searches haven't turned up much that's useful. Here are my present steps:
    1. I create a Master Page of my template.
    2. I add a bunch of text frames where I want the imported data from the XML file to be placed.
    3. I open the "Tags" window and import an XML file.
    4. I mark my text frames in the Master Document with the appropriate tags.
    5. I then add a lot of pages (like 200) to the document.
    6. Then I use "Import XML" to try and get the data brought in and filled across all 200 pages. This is where I fail.
    There's something I'm missing. It might be that InDesign doesn't work as I'm expecting... Does anyone have any good tips for mail-merge-like functionality with an XML document and auto-generation of InDesign pages? By the way, here's an example of Adobe's great documentation for merging repeated XML elements. There's got to be more... InDesign CS4 Docs: XML - Importing XML - Working with Repeating Data. Here's some of the sample XML; notice that the item element repeats. I've also truncated the data in the "desc" tag:
        <output>
          <item>
            <user_name>taude</user_name>
            <date>2009-02-21</date>
            <title>Wishful Thinking</title>
            <desc>Skiing up in Vermont on a beautiful day. This photo of</desc>
            <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/96104200949a162672e1996.15963073.jpeg</thumbnail>
          </item>
          <item>
            <user_name>taude</user_name>
            <date>2009-02-22</date>
            <title>Skiing Self Portrait</title>
            <desc>I was inspired by ML's self-portrait while </desc>
            <thumbnail>http://www.blipfoto.com/thumbs/5371/2009/big/color/36547696749a2c5782308e0.91477014.jpeg</thumbnail>
          </item>
        </output>
    Here's what my imported XML looks like with the InDesign Structure:

    Read the article

  • Automating the choice between JPEG and PNG with a script

    - by MHC
    Choosing the right format to save your images in is crucial for preserving image quality and reducing artifacts. Different formats follow different compression methods and come with their own sets of advantages and disadvantages. JPG, for instance, is suited to real-life photographs that are rich in color gradients. The lossless PNG, on the other hand, is far superior when it comes to schematic figures. Picking the right format can be a chore when working with a large number of files, which is why I would love to find a way to automate it. A little bit of background on my particular use case: I am working on a number of handouts for a series of lectures at my university. The handouts are rich in figures, which I have to extract from PDF-formatted slides. Extracting these images gives me lossless PNGs, which are needlessly large at times. Converting these particular files to JPEG can reduce them to less than 20% of their original file size while maintaining the same quality. This is important, as working with hundreds of large images in word processors is pretty crash-prone. Batch converting all extracted PNGs to JPEGs is not an option I am willing to follow, as many if not most images are better suited to PNG. Converting these would result in insignificant size reductions and sometimes even increases in file size - that's at least what my test runs showed. What we can take from this is that file size after compression can serve as an indicator of which format is suited best for a particular image. It's not a particularly accurate predictor, but it works well enough. So why not use it in the form of a script? I included inotifywait because I would prefer the script to be executed automatically as soon as I drag an extracted image into a folder. This is a simpler version of the script that I've been using for the last couple of weeks:
        #!/bin/bash
        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' | while read file; do
            mogrify -format jpg -quality 92 "$file"
        done
    The advanced version of the script would have to be able to:
    - handle spaces in file names and directory names
    - preserve the original file names
    - flatten PNG images if an alpha value is set
    - compare the file size between the temporary converted image and its original
    - determine if the difference is greater than a given percentage
    - act accordingly
    The actual conversion could be done with ImageMagick tools: convert -quality 92 -flatten -background white file.png file.jpg. Unfortunately, my bash skills aren't even close to advanced enough to turn the scheme above into an actual script, but I am sure many of you can. My reputation points on here are pretty low, but I will gladly award the most helpful answer with the highest bounty I can set. References: http://www.formortals.com/introducing-cnb-imageguide/, http://www.turnkeylinux.org/blog/png-vs-jpg Edit: Also see my comments below for some more information on why I think this script would be the best solution to the problem I am facing.
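    One possible shape for that advanced version - a rough sketch rather than a polished answer, and it makes a few assumptions: a fixed size threshold of 80% (adjust to taste), GNU mktemp and stat being available, and the same inotifywait watch list as in the simple script:
        #!/bin/bash
        # Sketch: convert each new PNG to a temporary JPEG, keep the JPEG only if it
        # is meaningfully smaller, otherwise leave the original PNG untouched.
        THRESHOLD=80   # assumed cut-off: keep the JPEG only if it is <= 80% of the PNG size

        inotifywait -m --format "%w%f" --exclude '.jpg' -r -e create -e moved_to \
            --fromfile '/home/MHC/.scripts/Workflow/Conversion/include_inotifywait' |
        while IFS= read -r file; do
            case "$file" in *.png|*.PNG) ;; *) continue ;; esac   # only consider PNGs
            tmp="$(mktemp --suffix=.jpg)" || continue
            # flatten against white in case the PNG carries an alpha channel
            if convert "$file" -background white -flatten -quality 92 "$tmp"; then
                png_size=$(stat -c%s "$file")
                jpg_size=$(stat -c%s "$tmp")
                if [ $((jpg_size * 100)) -lt $((png_size * THRESHOLD)) ]; then
                    mv "$tmp" "${file%.*}.jpg" && rm "$file"    # JPEG wins; original name preserved
                else
                    rm "$tmp"                                   # PNG wins; discard the JPEG
                fi
            else
                rm -f "$tmp"
            fi
        done
    The quoting and the IFS= read -r loop take care of spaces in file and directory names, and the size comparison is exactly the "file size after compression" test described above; only the threshold value and the temporary-file handling are additions to fill the gaps in the outline.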

    Read the article

  • ColdFusion autorestart

    - by Comcar
    ColdFusion is automatically restarting, a lot. It comes in waves: everything seems fine for a while, then the server struggles for a few minutes, restarts repeatedly, then settles down again. I have FusionReactor installed, but when CF goes down FR stops logging, so it's not really helping. Looking through the archived logs just shows gaps in the logs. These are all the occurrences of the phrase "ColdFusion started" today:
        [root@server2 logs]# grep -i "Coldfusion started" server.log | grep "11/27/12"
        "Information","main","11/27/12","01:49:35",,"ColdFusion started"
        "Information","main","11/27/12","01:50:46",,"ColdFusion started"
        "Information","main","11/27/12","01:52:39",,"ColdFusion started"
        "Information","main","11/27/12","01:54:08",,"ColdFusion started"
        "Information","main","11/27/12","01:55:12",,"ColdFusion started"
        "Information","main","11/27/12","01:56:29",,"ColdFusion started"
        "Information","main","11/27/12","01:57:36",,"ColdFusion started"
        "Information","main","11/27/12","01:58:57",,"ColdFusion started"
        "Information","main","11/27/12","01:59:56",,"ColdFusion started"
        "Information","main","11/27/12","02:01:38",,"ColdFusion started"
        "Information","main","11/27/12","02:03:11",,"ColdFusion started"
        "Information","main","11/27/12","02:04:41",,"ColdFusion started"
        "Information","main","11/27/12","02:07:53",,"ColdFusion started"
        "Information","main","11/27/12","02:10:45",,"ColdFusion started"
        "Information","main","11/27/12","02:11:49",,"ColdFusion started"
        "Information","main","11/27/12","02:13:09",,"ColdFusion started"
        "Information","main","11/27/12","02:14:18",,"ColdFusion started"
        "Information","main","11/27/12","02:15:44",,"ColdFusion started"
        "Information","main","11/27/12","02:17:06",,"ColdFusion started"
        "Information","main","11/27/12","02:34:19",,"ColdFusion started"
        "Information","main","11/27/12","03:01:20",,"ColdFusion started"
        "Information","main","11/27/12","05:25:59",,"ColdFusion started"
        "Information","main","11/27/12","06:30:48",,"ColdFusion started"
        "Information","main","11/27/12","06:36:20",,"ColdFusion started"
        "Information","main","11/27/12","09:34:07",,"ColdFusion started"
        "Information","main","11/27/12","09:35:39",,"ColdFusion started"
        "Information","main","11/27/12","09:36:41",,"ColdFusion started"
        "Information","main","11/27/12","09:39:15",,"ColdFusion started"
        "Information","main","11/27/12","09:40:42",,"ColdFusion started"
        "Information","main","11/27/12","09:42:55",,"ColdFusion started"
        "Information","main","11/27/12","09:44:23",,"ColdFusion started"
        "Information","main","11/27/12","09:46:18",,"ColdFusion started"
        "Information","main","11/27/12","09:47:35",,"ColdFusion started"
        "Information","main","11/27/12","09:48:53",,"ColdFusion started"
        "Information","main","11/27/12","09:50:04",,"ColdFusion started"
        "Information","main","11/27/12","09:51:51",,"ColdFusion started"
        "Information","main","11/27/12","09:53:05",,"ColdFusion started"
        "Information","main","11/27/12","09:54:24",,"ColdFusion started"
        "Information","main","11/27/12","09:55:28",,"ColdFusion started"
        "Information","main","11/27/12","09:56:38",,"ColdFusion started"
        "Information","main","11/27/12","09:58:03",,"ColdFusion started"
        "Information","main","11/27/12","09:59:03",,"ColdFusion started"
        "Information","main","11/27/12","10:04:37",,"ColdFusion started"
        "Information","main","11/27/12","12:04:02",,"ColdFusion started"
    I've been looking at the live server metrics in FR on a second screen all day; the CPU, memory, and requests all seemed fine at about 12 midday, then the server rebooted.
    Looking at the logs for the hour between 9am and 10am (more than 15 restarts in the hour), the CPU never went over 44% usage and the memory never exceeded 53% usage - in the recorded stats at least. There is no JDBC tracking at the moment, so I'll add that and see if it's MySQL causing the problem, but can anyone help me narrow down the problem? What would cause ColdFusion to auto-restart? I'm assuming the auto-restart is only happening because FusionReactor is installed. It's a Red Hat 5 LAMP stack running ColdFusion 9 and FusionReactor 4.5.2.
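    Since the restarts come in waves, a small shell helper (a sketch - it assumes the same server.log location and line format shown above) makes it easier to line the bursts up against the FusionReactor CPU/memory archives or the MySQL logs by counting restarts per hour:
        # count "ColdFusion started" entries per hour for the day in question
        grep -i '"ColdFusion started"' server.log | grep '"11/27/12"' | cut -d'"' -f8 | cut -d: -f1 | sort | uniq -c
    The output is a per-hour restart count, which can be compared against the recorded metrics for the same periods to see whether the spikes line up with anything FR did manage to capture.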

    Read the article

  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What I have: 1 master broker and 1 slave broker, both running ActiveMQ 5.4.0. What I use: waitForSlave on the master side and a failover URI on the slave side (in the master connector URI). What I want to do: I want to wait some delay (like 5 seconds) in case of a tiny network failure between master and slave before starting the slave transport connectors. So I put this in the slave config:
        <broker xmlns="http://activemq.apache.org/schema/core"
                brokerName="slave"
                dataDirectory="${activemq.base}/data"
                useJmx="true"
                persistent="true"
                populateJMSXUserID="true"
                masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000"
                shutdownOnMasterFailure="false"
                advisorySupport="false">
    It seems to work, but after a network hang between master and slave, the slave reconnects successfully and then the master logs a lot of:
        2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task
        java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0
            at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93)
            at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412)
            at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561)
            at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76)
            at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309)
            at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185)
            at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
            at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69)
            at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218)
            at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98)
            at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36)
    On the slave side everything is fine. So after that, I tried to stop the master to see if the slave is capable of taking over as master after these "network hangs".
    The master took a long time to shut down (10 seconds), and then some error messages appeared in the slave logs:
        2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5
        java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3
            at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93)
            at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412)
            at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600)
            at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74)
            at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309)
            at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185)
            at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
            at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69)
            at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218)
            at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98)
            at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36)
    Are there any ways to keep my KahaDB stores (they are individual stores) synchronised? The main problem is that the slave never takes over as master after a master failure; it stays blocked on this message:
        2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616
    I'm totally stuck with these sync problems; any help welcome! Regards

    Read the article

  • Properly Configured Rsyslog on CentOS

    - by Gaia
    I'm trying to configure rsyslog 5.8.10 on CentOS 6.4 to send Apache's error and access logs to a remote server. It's working, but I have a couple of questions.
    A) I would like to use as few queues (and resources) as possible. I send error logs to server A, send access logs to server A, and send both logs in one stream to server B. Should I specify one queue per external service (2 queues) or one queue per stream (3 queues, as I have now)? This is what I have:
        $ActionResumeInterval 10
        $ActionQueueSize 100000
        $ActionQueueDiscardMark 97500
        $ActionQueueHighWaterMark 80000
        $ActionQueueType LinkedList
        $ActionQueueFileName logglyaccessqueue
        $ActionQueueCheckpointInterval 100
        $ActionQueueMaxDiskSpace 1g
        $ActionResumeRetryCount -1
        $ActionQueueSaveOnShutdown on
        $ActionQueueTimeoutEnqueue 10
        $ActionQueueDiscardSeverity 0
        if $syslogtag startswith 'www-access' then @@logs-01.loggly.com:514;logglyaccess

        $ActionResumeInterval 10
        $ActionQueueSize 100000
        $ActionQueueDiscardMark 97500
        $ActionQueueHighWaterMark 80000
        $ActionQueueType LinkedList
        $ActionQueueFileName logglyerrorsqueue
        $ActionQueueCheckpointInterval 100
        $ActionQueueMaxDiskSpace 1g
        $ActionResumeRetryCount -1
        $ActionQueueSaveOnShutdown on
        $ActionQueueTimeoutEnqueue 10
        $ActionQueueDiscardSeverity 0
        if $syslogtag startswith 'www-errors' then @@logs-01.loggly.com:514;logglyerrors

        $DefaultNetstreamDriverCAFile /etc/syslog.papertrail.crt  # trust these CAs
        $ActionSendStreamDriver gtls          # use gtls netstream driver
        $ActionSendStreamDriverMode 1         # require TLS
        $ActionSendStreamDriverAuthMode x509/name  # authenticate by hostname
        $ActionResumeInterval 10
        $ActionQueueSize 100000
        $ActionQueueDiscardMark 97500
        $ActionQueueHighWaterMark 80000
        $ActionQueueType LinkedList
        $ActionQueueFileName papertrailqueue
        $ActionQueueCheckpointInterval 100
        $ActionQueueMaxDiskSpace 1g
        $ActionResumeRetryCount -1
        $ActionQueueSaveOnShutdown on
        $ActionQueueTimeoutEnqueue 10
        $ActionQueueDiscardSeverity 0
        *.* @@logs.papertrailapp.com:XXXXX;papertrailstandard
        & ~
    B) Does a queue block get used over and over by every send action that follows it, or only by the first one, or only until it encounters a send followed by a discard action (~)?
    C) How do I reset a queue block so that an upcoming send action does not use a queue at all?
    D) Does a TLS block get used over and over by every send action that follows it, or only by the first one, or only until it encounters a send followed by a discard action (~)?
    E) How do I reset a TLS block so that an upcoming send action does not use TLS at all?
    F) If I run rsyslogd -N1 I get:
        rsyslogd -N1
        rsyslogd: version 5.8.10, config validation run (level 1), master config /etc/rsyslog.conf
        rsyslogd: WARNING: rsyslogd is running in compatibility mode. Automatically generated config directives may interfer with your rsyslog.conf settings. We suggest upgrading your config and adding -c5 as the first rsyslogd option.
        rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad immark
        rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: MarkMessagePeriod 1200
        rsyslogd: Warning: backward compatibility layer added to following directive to rsyslog.conf: ModLoad imuxsock
        rsyslogd: End of config validation run. Bye.
    Where do I place the -c5 so that it doesn't run in compatibility mode anymore?
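    For question F, a minimal sketch of the usual CentOS 6 arrangement - offered under the assumption that this box uses the stock init script, which reads its daemon flags from /etc/sysconfig/rsyslog: the option belongs on the rsyslogd command line (not in rsyslog.conf), so it has to be supplied both to manual validation runs and to the init script:
        # Validation run without the compatibility warning: pass -c5 as the first option
        rsyslogd -c5 -N1
        # For the daemon itself, the flag goes into the init script's options, which on a
        # stock CentOS 6 install are read from /etc/sysconfig/rsyslog (an assumption here):
        #   SYSLOGD_OPTIONS="-c5"
        service rsyslog restart
    In other words, the -N1 run shows the warning simply because that invocation was started without -c5, independently of how the background daemon was started.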

    Read the article

  • Custom route not working on Windows

    - by Michael Closson
    My Windows laptop is directly connected to 192.168.1.0/24 (wireless LAN). I access 10.21.0.0/16 through a router that is connected to both networks. The routing works fine with this configuration. I have a VPN that connects to 10.0.0.0/8. The VPN network doesn't actually use any IPs in the 10.21.0.0/16 range, so I should be able to configure my routing table to route all the 10.21.0.0/16 IPs through the wireless LAN and everything else in 10.0.0.0/8 through the VPN. My understanding is that I can do this if the metric for the 10.21.0.0 route is lower than that of the 10.0.0.0 route. The VPN (10.0.0.0) is automatically assigned metric 20. I have manually assigned the WLAN a metric of 1. I manually add an entry to the routing table with this command:
        route add 10.21.0.0 mask 255.255.0.0 192.168.1.201 metric 1
    The route is then assigned a metric of 2 (which is expected). The problem is that it doesn't work. I can't ping any machine on the 10.21.0.0 network, but I can access other things on 10.0.0.0, and I can also access things on 192.168.1.0. To debug this I've done the following:
    - Run tcpdump on the router (192.168.1.201). I can verify that no packets for 10.21.0.0 arrive on that interface.
    - Disable iptables on the router.
    - Disable the Windows firewall.
    - Run Wireshark on my laptop, to try and see which interface the ping requests go to. But I can't see them go anywhere!
    The ping command doesn't receive any 'destination unreachable' messages. Here is the relevant section of the routing table:
        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0    192.168.1.201    192.168.1.18       2
                 10.0.0.0        255.0.0.0          On-link     10.55.44.203      20
                10.21.0.0      255.255.0.0    192.168.1.201    192.168.1.18       2

    Read the article
