Search Results

Search found 8416 results on 337 pages for 'stop'.

  • Missing Fields and Default Values

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved.

    Dealing with Missing Fields and Default Values

    New fields and new default values are not propagated throughout the list. They apply only to new and updated items, not to items already entered; they are only prospective. We need to be able to deal with this issue.

    Here is a scenario: the user has an old list with old items and adds a new field. The field is not created for any of the old items, and trying to get its value raises an ArgumentException. Here is another: a default value is added to a field. All the old items, where the field was not assigned a value, do not get the new default value. The two can also happen in tandem – a new field is added with a default, and the older items have neither. Even better, if the user changes the default value, the old items still carry the old defaults.

    Let's go a bit further. You have already written code for the list – be it an event receiver, a feature receiver, a console app, or a command extension – in which you span all the fields and run on selected items: some new (no problem) and some old (problems aplenty). Had you written defensive code, you would be able to handle the situation, including similar changes in the future. So, without further ado, here's how.

    Instead of just getting the value of a field in an item – item[field].ToString() – use the function below. I use ItemValue(item, fieldname, "mud in your eye"), and if "mud in your eye" is what I get, I know that the item did not have the field.

    ```csharp
    /// <summary>
    /// Return the column value or a default value.
    /// </summary>
    private static string ItemValue(SPItem item, string column, string defaultValue)
    {
        try
        {
            return item[column].ToString();
        }
        catch (NullReferenceException)
        {
            return defaultValue;
        }
        catch (ArgumentException)
        {
            return defaultValue;
        }
    }
    ```

    I also use a similar function to return the default, with a funny default-default to ascertain that the default does not exist. Here it is:

    ```csharp
    /// <summary>
    /// Return a field's default or the "default" default.
    /// </summary>
    public static string GetFieldDefault(SPField fld, string defValue)
    {
        try
        {
            // Check if a default exists.
            return fld.DefaultValue.ToString();
        }
        catch (NullReferenceException)
        {
            return defValue;
        }
        catch (ArgumentException)
        {
            return defValue;
        }
    }
    ```

    How is this defensive? You have trapped an expected error and dealt with it, so the program did not stop cold in its tracks and the required code ran to its end. Now, take a further step – write to a log (see Logging – a log blog). Read your own log every now and then, and act accordingly. That's all, folks!
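    A minimal usage sketch, assuming an SPWeb reference named web and the ItemValue helper above; the list name "Tasks" and the field name "Priority" are hypothetical:

    ```csharp
    // Span the items of a list defensively; old items that never had
    // the "Priority" field simply fall back to the sentinel default.
    SPList list = web.Lists["Tasks"];
    foreach (SPListItem item in list.Items)
    {
        string priority = ItemValue(item, "Priority", "mud in your eye");
        if (priority == "mud in your eye")
        {
            // The field did not exist on this (older) item.
            continue;
        }
        Console.WriteLine("{0}: {1}", item.Title, priority);
    }
    ```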

  • lvm disappeared after disc replacement on raid10

    - by user142295
    Here is my problem: I am running Ubuntu 12.04 on a RAID10 (4 disks), on top of which I installed an LVM with two volume groups (one for /, one for /home). The layout of the disks is as follows:

    ```
    Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0003f3b6

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          63      481949      240943+  83  Linux
    /dev/sda2          481950  2910640634  1455079342+  fd  Linux raid autodetect
    /dev/sda3      2910640635  2930272064     9815715   82  Linux swap / Solaris

    Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00069785

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1              63  2910158684  1455079311   fd  Linux raid autodetect
    /dev/sdb2      2910158685  2930272064    10056690   82  Linux swap / Solaris

    Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1              63  2910158684  1455079311   fd  Linux raid autodetect
    /dev/sdc2      2910158685  2930272064    10056690   82  Linux swap / Solaris

    Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
    255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000f14de

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1              63  2910158684  1455079311   fd  Linux raid autodetect
    /dev/sdd2      2910158685  2930272064    10056690   82  Linux swap / Solaris
    ```

    The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use GRUB 2 to boot the system off this partition. On top of this RAID10 I installed two volume groups, one for /, one for /home.

    This system worked well; I even exchanged two disks during the last two years, and it always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyway to overcome the problem of /boot being installed on that disk and GRUB 2 being installed on the MBR of /dev/sda.

    Anyway, I did what I always did:

    1. Start Knoppix.
    2. Fire up the RAID: sudo mdadm --examine --scan, which returns
       ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
    3. Start it up: sudo mdadm --assemble /dev/md127
    4. Fail the failing disk (SMART event): sudo mdadm /dev/md127 --fail /dev/sda2
    5. Remove the failing disk: sudo mdadm /dev/md127 --remove /dev/sda2
    6. Stop the RAID: sudo mdadm -S /dev/md127
    7. Take out the disk and replace it with a new one.
    8. Create the same partitions as on the failing one.
    9. Add it to the RAID: sudo mdadm --assemble /dev/md127, then sudo mdadm /dev/md127 --add /dev/sda2
    10. Wait 4 hours.

    All looks fine. cat /proc/mdstat returns:

    ```
    Personalities : [raid10]
    md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
          2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

    unused devices: <none>
    ```

    and sudo mdadm --detail /dev/md127 returns:

    ```
    /dev/md127:
            Version : 0.90
      Creation Time : Wed Jun 10 13:08:46 2009
         Raid Level : raid10
         Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
      Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 127
        Persistence : Superblock is persistent

        Update Time : Thu Mar 21 16:27:40 2013
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0

             Layout : near=2
         Chunk Size : 64K

               UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
             Events : 0.4824680

        Number   Major   Minor   RaidDevice State
           0       8        2        0      active sync   /dev/sda2
           1       8       17        1      active sync   /dev/sdb1
           2       8       33        2      active sync   /dev/sdc1
           3       8       49        3      active sync   /dev/sdd1
    ```

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system does not help either (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition; no wonder, if the volume group is gone). sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, and sudo vgscan --mknodes all return "No volume groups found."

    I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!

  • Blind As a Bat in Multi-Monitor Hell – Free Program Inside!

    - by ToStringTheory
    If you know me personally, then you probably know that I am going blind thanks to a rare genetic eye disease. My eye disease has already been a huge detriment to my eyesight. One of the big things to suffer is my ability to see small things moving fast right in front of me. On a multi-monitor setup, this makes finding the cursor absolute hell.

    The Problem

    I'll keep this short, as I've basically already told you what the problem is. On my three-monitor development computer, I am constantly losing the mouse during the day. I had used the Microsoft accessibility mousefinder (press CTRL, and it pings around the mouse). The problem with this is that the effect only covers around 50–100 px around the mouse, and it is a very light gray, almost unnoticeable. For someone like me, if I am not looking at the monitor when I click the CTRL button, I have to click it multiple times and dart my eyes back and forth in a futile attempt to catch a glimpse of the action… I had tried other cursor finders, but none that I liked…

    The Solution

    So what's a guy to do when he doesn't like his options? MAKE A NEW OPTION… What else should we as developers do, am I right? So, I went ahead and made a mousefinder of my own, with 6 separate settings to change the effect. I am releasing it here for anyone else who may also have problems finding their mouse at times. Some of its features include:

    - Multiple options to change to achieve the exact effect you want.
    - If your mouse moves while it is honing in, it will hone in on the mouse's current position. Many times I would press the button and move my mouse at the same time, and often the mouse happened to be at a screen edge, so I would miss it.
    - The program will restart its animation on a new screen if the mouse changes screens while the effect is playing.
    - Tested on Windows 7 x64.
    - Stylish color changing from green to red.
    - Deployed as a ClickOnce, so it is easy to remove if you don't like it.
    - Press Right CTRL to trigger the effect.
    - The application lives in the notification area so that you can easily reach its configuration or close it.
    - To get it to run on startup, copy its application shortcut from its start-menu directory to the "Startup" folder in your start menu.

    Conclusion

    I understand if you don't download this… You don't know me and I don't know you. I can only say that I have honestly NOT added any viruses or malware to the package. Yeah, I know it's weird.

    Download: 'ToString(theory) Mousefinder.zip'
    CRC32: EEBCE300
    MD5: 0394DA581BE6F3371B5BA11A8B24BC91
    SHA-1: 2080C4930A2E7D98B81787BB5E19BB24E118991C

    Finally, if you do use this application – please leave a comment, or email me and tell me what you think of it. Encounter a bug, or find that the hashes no longer match? I want to know that too!

    <warning type="BadPun">Now, stop messing around and start mousing around!</warning>
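    To check the download against the hashes above before running it, one option is the certutil tool that ships with Windows; a quick sketch, with the file name taken from the download link:

    ```
    certutil -hashfile "ToString(theory) Mousefinder.zip" MD5
    certutil -hashfile "ToString(theory) Mousefinder.zip" SHA1
    ```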

  • Why Are We Here?

    - by Jonathan Mills
    Back in the early 2000s, Toyota had a vision of building the number-one best-selling minivan in North America. Their current minivan, the Sienna, was small, underpowered, and badly in need of help. Yuji Yokoya was given the job of re-engineering the Sienna. There was just one problem: Yuji lived in Japan. He did not know the people or places that he would be engineering for. Believe it or not, Japan is nothing like North America. So, what does a chief engineer do in a situation like that? He packed up his team and flew halfway around the world. He made a commitment to drive through every state in the US, every province in Canada, and Mexico. He met the people and drove the roads that the Sienna would be driving. And guess what – what he learned on that trip revolutionized the Sienna. The innovations he made sent the Sienna to number one. Why? Because he knew who he was building his product for. He knew why he was there.

    Let me ask you this: do you know why you are building what you are building? As a member of a product team, can you tell me how your product will be used in the real world? As you are writing code, building test plans, writing stories, or doing any of the other project tasks, can you picture the face of a person who will be using what you are building? All too often, the answer to those questions is no. Why is it important? Because, every day, project team members make assumptions. Over a given project, it is safe to say project team members will make thousands of assumptions about what they are doing. And all too often, those assumptions are not quite right. It's not that they are not good at their jobs; it's just that they don't really know why they are there.

    So, what to do? First and foremost, stop doing what you are doing. Yes, really. Schedule some time to go visit the people who will be using your product. Don't invite them to you – go to them. Watch them work. Interact with them. Ask them questions. Maybe even try it out yourself. This serves two purposes. One, it shows them that you care about them. They will be far more engaged in your project if they feel like you care, and nothing says you care more than spending some time. Two, it gives you the proper frame of reference for your work. It gives you something tangible to go back to as you are building your product. As you make the thousands of assumptions that you will make over the life of your project, it gives you something to see in your mind that makes it real to you.

    Ultimately, setting a proper frame of reference is critical to the overall success of a project. The funny thing is, it really does not even take that long. In most cases, a 2–3 hour session will give you most of what you need to get the right insight. For the project, it will be the best few hours you could spend.

  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their intranet and extranet on a pair of SPARC Solaris servers interconnected through InfiniBand. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the intranet and the extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology – a.k.a. Project Crossbow – can be used to control outbound traffic from Solaris Zones.

    Solaris Zones from both the intranet and extranet use an InfiniBand network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance, and the non-global zones are installed on these iSCSI LUs. With no security hardening, if an extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the intranet zones or, even worse, to the global zones, as all the zones are reachable from this node.

    One solution consists of using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the InfiniBand, a flow can be created for TCP traffic on port 22 – that is, a flow for the SSH traffic. A bandwidth can be specified for that flow and, if it is set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris Zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris Zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place.

    Here are the instructions to create a Crossbow flow as used in Schema 1:

    ```
    (GZ)# zoneadm -z zonename halt
    ```

    ...halts the Solaris Zone.

    ```
    (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter
    ```

    ...creates a flow on the IB partition "iblink" used by the zone to connect to the InfiniBand. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for the TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".

    ```
    (GZ)# zoneadm -z zonename boot
    ```

    ...restarts the Solaris Zone now that the flow is in place.

    Solaris Zones and Solaris Network Virtualization enable SSH access control on InfiniBand (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the InfiniBand switch. All the security enforcements are put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to the many other security controls available with Oracle Solaris, such as IP Filter and Role-Based Access Control, that can be used to tackle security challenges.
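    Once the flow is in place, it can be inspected from the global zone; a quick sketch, assuming the flow name used above:

    ```
    (GZ)# flowadm show-flow sshFilter
    (GZ)# flowadm show-flowprop sshFilter
    ```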

  • Do Great Work

    - by user12601034
    Have you ever attended an online conference and actually had a desire to attend all of it? Yesterday I attended the first day of the Great Work MBA program, sponsored by Box of Crayons and hosted by Michael Bungay Stanier. The topic of the day was "Grounding Yourself," and the day featured five speakers on five different topics. I have to admit that I started the first session with kind of a "blech" feeling, not really wanting to participate, but for some reason I did. So I listened to the first session, and I was hooked. I ended up listening to all of the sessions for the day, and I had some great take-aways. My highlights included:

    - The opposite of bravery isn't fear, it's settling. In essence, you need to be brave in order to accomplish anything. If you're settling, you're not being brave, and your accomplishments will likely be lackluster.
    - Bravery requires confidence and permission. You need to work at being brave by taking small wins, building them up, and then taking slightly larger risks. Additionally, you need to "claim your own crown." Nobody in the business world is going to give you permission to be a guru in X – you need to give yourself permission to become a guru in X and then do it.
    - Fall in love with obstacles. Everyone is going to face some form of failure. One way to deal with this is to fall in love with solving the puzzle of obstacles. You don't have to hit an obstacle if you can go around it.
    - Understanding purpose brings out the best in people and the best people. As a leader, drawing in people who are passionate and highly motivated about their work creates velocity for your organization. Being clear about purpose is the first step in doing this.
    - You must own your own story. Everything about you creates a "unique you" that is distinct from everyone else. As you take ownership of this, it becomes part of your strength. It's not a strength if you're running away from it.
    - Focus on what's right. Be aware of your tendency to interpret a situation a certain way, and differentiate between helpful and unhelpful interpretations.
    - Three questions for how to think differently: 1) Why? 2) Who says so? 3) What would happen if? These three questions can help you build alternative perspectives and options that can increase resiliency.

    Even though this first day was focused on "Grounding Yourself," I see plenty of application in the corporate environment for both individuals and leaders of teams. To apply these highlights to my work environment, I would do the following:

    - Understand the purpose – of my company, of my team, and of my role on the team. If I know the purpose, I know what I need to bring to the table to make me, my team, and my company successful.
    - Declare your goals – your BHAGs (big, hairy, audacious goals). Have the confidence to declare what you and/or your team are going to accomplish. Sure, you might have to restate those goals down the line, but you can learn from that as well.
    - Get creative about achieving your goals. Break down your obstacles by asking yourself what is going to stop you from achieving your goals and then, for each obstacle, ask those three questions: Why? Who says so? What would happen if?
    - Focus on what's right. I had a manager who asked us to write status reports every week. "Status" consisted of 1) what I accomplished; 2) what I will accomplish next week; 3) how my manager can help me. The focus of our status reports was always "what's right" ("what's wrong" was always a conversation at the point in time it was needed).

    I'm normally a skeptic of online webcasts/conferences, and I normally expect to take away maybe one or two ideas. I'm really glad, however, that I took the time to listen to all of the sessions yesterday, and I hope that my take-aways inspire you to think about how you might do great work also.

  • 4 Key Ingredients for the Cloud

    - by Kellsey Ruppel
    It's a short week here with the US Thanksgiving holiday. So, before we put on our stretch pants and get ready to belly up to the dinner table for turkey, stuffing, and mashed potatoes, let's spend a little time this week talking about the Cloud (kind of like the feathery whipped goodness that tops the infamous Thanksgiving pumpkin pie!). But before we dive into the Cloud, let's do a side-by-side comparison of the key ingredients for each:

        Cloud                     Whipped Cream
        -----------------------   ------------------
        Application Integration   1 cup heavy cream
        Security                  1/4 cup sugar
        Virtual I/O               1 teaspoon vanilla
        Storage                   Chilled bowl

    It's no secret that millions of people are connected to the Internet. And it also probably doesn't come as a surprise that a lot of those people are connected on social networking sites. Social networks have become an excellent platform for sharing and communication that reflects real-world relationships, and they play a major part in the everyday lives of many people. Facebook, Twitter, Pinterest, LinkedIn, Google+, and hundreds of others have transformed the way we interact and communicate with one another.

    Social networks are becoming more than just an online gathering of friends. They are becoming a destination for ideation, e-commerce, and marketing. But it doesn't just stop there. Some organizations are utilizing social networks internally, integrated with their business applications and processes, and the possibility of social media and cloud integration is compelling. Forrester alone estimates enterprise cloud computing to grow to over $240 billion by 2020. It's hard to find any current IT project today that is NOT considering cloud-based deployments. Security and quality-of-service concerns are no longer at the forefront; rather, it's about focusing on the right mix of capabilities for the business. Cloud vs. on-premise? Policies and governance models? Social in the cloud? Cloud's increasing sophistication, security in applications, mobility, transaction processing, and social capabilities make it an attractive way to manage information. And Oracle offers all of this through the Oracle Cloud and Oracle Social Network.

    Oracle Social Network is a secure private network that provides a broad range of social tools designed to capture and preserve information flowing between people, enterprise applications, and business processes. By connecting you with your most critical applications, Oracle Social Network provides contextual, real-time communication within and across enterprises. With Oracle Social Network, you and your teams have the tools you need to collaborate quickly and efficiently, while leveraging the organization's collective expertise to make informed decisions and drive business forward.

    Oracle Social Network is available as part of a portfolio of application and platform services within the Oracle Cloud. Oracle Cloud offers self-service business applications delivered on an integrated development and deployment platform with tools to rapidly extend and create new services. Oracle Social Network is pre-integrated with the Fusion CRM Cloud Service and the Fusion HCM Cloud Service within the Oracle Cloud.

    If you are looking for something to watch as you veg on the couch in a post-turkey-dinner hangover, you might consider watching these how-to videos! And yes, it is perfectly OK to have that second piece of pie.

  • Using HTML5 Today part 3 – Using Polyfills

    - by Steve Albers
    Shims help when adding semantic tags to older IE browsers, but there is a huge range of other new HTML5 features with varying support across browsers. Polyfills are JavaScript code and/or browser plug-ins that can provide older or less featured browsers with API support. The best polyfills will detect whether the current browser has native support, and only add the functionality if necessary. The Douglas Crockford JSON2.js library is an example of this approach: if the browser already supports the JSON object, nothing changes. If JSON is not available, the library adds a JSON property to the global object.

    This approach provides some big benefits:

    - It lets you add great new HTML5 features to your web sites sooner.
    - It lets the developer focus on writing to the up-and-coming standard rather than proprietary APIs.
    - Where most one-off legacy code fixes tend to break down over time, well-done polyfills will stop executing over time (as customer browsers natively support the feature), meaning polyfill code may not need to be tested against new browsers, since they will execute the native methods instead.

    You should also remember that polyfills represent an entirely separate code path (and sometimes a plug-in) that requires testing for support. Also, polyfills tend to run on older browsers, which often have slower JavaScript performance. As a result, you might find that performance on older browsers is not comparable. When looking for polyfills, you can start by checking the Modernizr GitHub wiki or the HTML5 Please site.

    For an example of a polyfill, consider a page that draws a few geometric shapes on a <canvas>:

    ```html
    <script src="jquery-1.7.1.min.js"></script>
    <script>
        $(document).ready(function () {
            drawCanvas();
        });

        function drawCanvas() {
            var context = $("canvas")[0].getContext('2d');
            // background
            context.fillStyle = "#8B0000";
            context.fillRect(5, 5, 300, 100);
            // empty box
            context.strokeStyle = "#B0C4DE";
            context.lineWidth = 4;
            context.strokeRect(20, 15, 80, 80);
            // circle
            context.arc(160, 55, 40, 0, Math.PI * 2, false);
            context.fillStyle = "#4B0082";
            context.fill();
        }
    </script>
    ```

    The result is a simple static canvas with a box and a circle. To enable this functionality on a pre-canvas browser, we can find a polyfill. A check on html5please.com references FlashCanvas. Pull down the zip and extract the files (flashcanvas.js, flash10canvas.swf, etc.) to a directory on your site. Then, based on the documentation, you need to add a single line to your original HTML file:

    ```html
    <!--[if lt IE 9]><script src="flashcanvas.js"></script><![endif]-->
    ```

    ...and you have canvas functionality! The IE conditional comments ensure that the library is only loaded in browsers where it is useful, improving page load and processing time. Like all polyfills, you should test to verify that the functionality matches your expectations across the browsers you need to support. For instance, the FlashCanvas home page advertises 70% support of the HTML5 Canvas spec tests.
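    To make the detect-then-fill pattern concrete, here is a minimal sketch of canvas feature detection, similar to what Modernizr does internally; the polyfill path is the one assumed in the example above:

    ```javascript
    // Detect native canvas support by probing for a usable 2D context.
    function supportsCanvas() {
        var elem = document.createElement('canvas');
        return !!(elem.getContext && elem.getContext('2d'));
    }

    // Load the polyfill only where it is needed.
    if (!supportsCanvas()) {
        var script = document.createElement('script');
        script.src = 'flashcanvas.js';
        document.getElementsByTagName('head')[0].appendChild(script);
    }
    ```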

  • Why I switch from Asana.com

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/10/24/why-i-switch-from-asana.com.aspx

    I used Asana.com for 1–2 years and had a decent experience with it, but it's not so easy to use. When I started using it, it caused a lot of confusion. Now I have switched away from it.

    When I first saw it, I really didn't understand how to make a private list. There is an icon on top; click on it to make the list private. After doing that, I still was not sure if it was working. There was a lot of confusion at that time, and I had to discuss far too much to figure out small things. The UI is interesting, but very hard to understand.

    What I am looking for is just a list that I can keep private. I would like to share it only if I mark it shared and enter the email address of the person who should see the same list.

    A few days ago I noticed that my Windows 8 phone has an app called Microsoft OneNote. The good thing about this MS app is that I can record my voice in it. If someone wants to make a list for the future, he just needs to speak and it can be recorded. This is awesome when you feel that a mobile keypad is just not as fast as a normal, regular keyboard.

    Google Docs are another good option to handle this. Just make a word file and use it, and share it with a friend, with many options. One of the best things is that this app has a simpler UI than many other apps.

    One more alternative is https://trello.com, which you may have heard about from Joel on his blog: http://www.joelonsoftware.com/items/2011/09/13.html

    There are many HTML5-, browser-, and mobile-based apps. Many of them support multiple platforms, which means you can use them everywhere from your PC to your pocket. One good feature we all want is offline support: if you are not online, things will be saved and pushed back to the server when you are online again.

    The biggest problem with some apps is that they are attractive but hard to learn. What a given feature does is not clearly defined. This creates frustration and confusion for the user, and when apps are not simple to use, people stop trying to learn them. That is the whole problem I have with asana.com.

    If you don't want to try anything new, then what about Sticky Notes, which are part of Windows 7? This app is still usable, since you can store text on it.

    If you know any good app for making a task list that provides access from tablet/mobile, then put a comment here. In the whole world of apps there are a lot of apps for doing this same thing differently. I have mentioned a few of them here, and I hope this description is useful.

    Thanks for reading my post.

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies – at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM), where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e., cold).

    I think the reason this question gets asked is that customers realize many of their business processes are cyclical, and the thinking goes that those segments that only get used during month-end or year-end cycles could sit on lower-cost storage when not being used. Unfortunately, this doesn't fit very well with the ADO storage tiering model.

    ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior: TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space used in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clause(s) can be moved until the percentage of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace.

    Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available – at least that's the premise, since it is supposed to be less costly, lower-performing, and higher-capacity storage. In either case, though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory, you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using heat map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.
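    For reference, a minimal sketch of the standard one-way tiering setup driven by these parameters (Oracle 12c syntax; the table and tablespace names are hypothetical, and the threshold values are illustrative):

    ```sql
    -- Move the segment to tier 2 storage when the source tablespace fills up.
    ALTER TABLE sales ILM ADD POLICY TIER TO low_cost_tbs;

    -- The thresholds that trigger the move.
    EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
    EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);
    ```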

  • HTML, JavaScript, and CSS in a NetBeans Platform Application

    - by Geertjan
    I broke down the code I used yesterday to its absolute bare minimum, and then realized I'm not using HTML5 at all:

    ```html
    <html>
        <head>
            <link rel="stylesheet" href="style.css" type="text/css" media="all" />
            <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js"></script>
            <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"></script>
            <script type="text/javascript" src="script.js"></script>
        </head>
        <body>
            <div id="logo"></div>
            <div id="infobox">
                <h2 id="statustext"/>
            </div>
        </body>
    </html>
    ```

    Here's the script.js file referred to above:

    ```javascript
    $(function(){
        var banana = $("#logo");
        var statustext = $("#statustext");
        var defaulttxt = "Drag the banana!";
        var dragtxt = "Dragging the banana!";
        statustext.text(defaulttxt);
        banana.draggable({
            drag: function(event, ui){
                statustext.text(dragtxt);
            },
            stop: function(event, ui){
                statustext.text(defaulttxt);
            }
        });
    });
    ```

    And here's the stylesheet:

    ```css
    body {
        background: #3B4D61 repeat 0 0;
        margin: 0;
        padding: 0;
    }
    h2 {
        color: #D1D8DF;
        display: block;
        font: bold 15px/10px Tahoma, Helvetica, Arial, Sans-Serif;
        text-align: center;
    }
    #infobox {
        position: absolute;
        width: 300px;
        bottom: 20px;
        left: 50%;
        margin-left: -150px;
        padding: 0 20px;
        background: rgba(0,0,0,0.5);
        -webkit-border-radius: 15px;
        -moz-border-radius: 15px;
        border-radius: 15px;
        z-index: 999;
    }
    #logo {
        position: absolute;
        width: 450px;
        height: 150px;
        top: 40%;
        left: 30%;
        background: url(bananas.png) no-repeat 0 0;
        cursor: move;
        z-index: 700;
    }
    ```

    However, I've replaced the content of the HTML file with a few of the samples from https://developer.mozilla.org/en/Canvas_tutorial/Basic_usage without any problem; in other words, if the HTML5 canvas were to be needed, it could seamlessly be incorporated into my NetBeans Platform application.

  • Obfuscation is not a panacea

    - by simonc
    So, you want to obfuscate your .NET application. My question to you is: why? What are your aims when you obfuscate your application? To protect your IP and algorithms? To prevent crackers from breaking your licensing? Because your boss says you need to? To give yourself a warm fuzzy feeling inside? Obfuscating code correctly can be tricky; it can break your app if applied incorrectly, and it can cause problems down the line. Let me be clear – there are some very good reasons why you would want to obfuscate your .NET application. However, you shouldn't be obfuscating for the sake of obfuscating.

    Security through Obfuscation?

    Once your application has been installed on a user's computer, you no longer control it. If they do not want to pay for your application, then nothing can stop them from cracking it, even if the time cost to them is much greater than the cost of actually paying for it. Some people will not pay for software, even if it takes them a month to crack a $30 app. And once it is cracked, there is nothing stopping them from putting the result up on the internet. There should be nothing surprising about this; there is no software protection available for general-purpose computers that cannot be cracked by a sufficiently determined attacker. Only by completely controlling the entire stack – software, hardware, and the internet connection – can you have even a chance of being uncrackable. And even then, someone somewhere will still have a go, and probably succeed. Even high-end cryptoprocessors have known vulnerabilities that can be exploited by someone with a scanning electron microscope and lots of free time.

    So, then, why use obfuscation? Well, the primary reason is to protect your IP. What obfuscation is very good at is hiding the overall structure of your program, so that it's very hard to figure out what exactly the code is doing at any one time, what context it is running in, and how it fits in with the rest of the application – all of which you need to do to understand how the application operates. This is completely different from cracking an application, where you simply have to find a single toggle that determines whether the application is licensed or not, and flip it without the rest of the application noticing.

    However, again, there are limitations. An obfuscated application still has to run in the same way, and do the same thing, as the original unobfuscated application. This means that some of the protections applied to the obfuscated assembly have to be undone at runtime, or else it would not run on the CLR and do the same thing. And, again, since we don't control the environment the application is run on, there is nothing stopping a user from undoing those protections manually and reversing some of the obfuscation.

    It's a perpetual arms race, and it always will be. We have plenty of ideas lined up for new protections, and the new protections added in SA 6.6 (method parent obfuscation and a new control flow obfuscation level) are specifically designed to be harder to reverse and to make it harder to reconstruct the original structure. So then, by all means, obfuscate your application if you want to protect its algorithms and what the application does. That's what SmartAssembly is designed to do. But make sure you are clear about what a .NET obfuscator can and cannot protect you against, and don't expect your obfuscated application to be uncrackable. Someone, somewhere, will crack your application if they want to and they don't have anything better to do with their time. The best we can do is dissuade the casual crackers and make it much more difficult for the serious ones.

    Cross posted from Simple Talk.

  • How do I solve this "unexpected '}' syntax error" in my bash script?

    - by WASasquatch
    I have a piece of code that has some serious issues, and I was hoping to get it solved soon, but no one has offered any help. I thought I'd try some Ubuntu users, since this is the OS running the script.

    ```bash
    mc_addplugin() {
      if pgrep -u $USERNAME -f $SERVICE > /dev/null
      then
        echo "$SERVICE is running! Please stop the service before adding a plugin."
      else
        echo "Paste the URL to the .JAR Plugin..."
        read JARURL
        JARNAME=$(basename "$JARURL")
        if [ -d "$TEMPPLUGINS" ]
        then
          as_user "cd $PLUGINSPATH && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME"
        else
          as_user "cd $PLUGINSPATH && mkdir $TEMPPLUGINS && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME"
        fi

        if [ -f "$TMPDIR/$JARNAME" ]
        then
          if [ -f "$PLUGINSPATH/$JARNAME" ]
          then
            if `diff $PLUGINSPATH/$JARNAME $TMPDIR/$JARNAME >/dev/null`
            then
              echo "You are already running the latest version of $JARNAME."
            else
              NOW=`date "+%Y-%m-%d_%Hh%M"`
              echo "Are you sure you want to overwrite this plugin? [Y/n]"
              echo "Note: Your old plugin will be moved to the "$TEMPPLUGINS" folder with todays date."
              select yn in "Yes" "No"; do
                case $yn in
                  Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;;
                  No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..."
                       as_user "rm $TEMPPLUGINS/$JARNAME"; exit;;
                esac
              done
              echo "Would you like to start the $SERVICE now? [Y/n]"
              select yn in "Yes" "No"; do
                case $yn in
                  Yes ) mc_start; break;;
                  No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;;
                esac
              done
            fi
          else
            echo "Are you sure you want to add this new plugin? [Y/n]"
            select yn in "Yes" "No"; do
              case $yn in
                Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;;
                No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..."
                     as_user "rm $TEMPPLUGINS/$JARNAME"; exit;;
              esac
            done
            echo "Would you like to start the $SERVICE now? [Y/n]?"
            select yn in "Yes" "No"; do
              case $yn in
                Yes ) mc_start; break;;
                No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;;
              esac
            done
          fi
        else
          echo "Failed to download the plugin from the URL you specified!"
          exit;
        fi
    }
    ```

    It throws the error at the closing bracket at the end of the function.

  • Changes in the Maven Embedded GlassFish plugin

    - by Romain Grecourt
    The plugin changed its Maven coordinates (a.k.a. GAV) over time:

    - version <= 3.1.1: available under org.glassfish:maven-glassfish-embedded-plugin
    - version >= 3.1.2: available under org.glassfish.embedded:maven-glassfish-embedded-plugin

    The goal "glassfish-embedded:run" has changed its way of reading the deployment configuration in the latest version, 4.0. Projects using previous versions of the plugin will stop working with this goal. Here is an example of the old behavior:

    ```xml
    <plugin>
        <groupId>org.glassfish.embedded</groupId>
        <artifactId>maven-embedded-glassfish-plugin</artifactId>
        <version>3.1.2.2</version>
        <configuration>
            <app>target/${project.build.finalName}.war</app>
            <contextRoot>/</contextRoot>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
        </configuration>
    </plugin>
    ```

    The new behavior is as follows:

    ```xml
    <plugin>
        <groupId>org.glassfish.embedded</groupId>
        <artifactId>maven-embedded-glassfish-plugin</artifactId>
        <version>4.0</version>
        <configuration>
            <goalPrefix>embedded-glassfish</goalPrefix>
            <autoDelete>true</autoDelete>
            <port>8080</port>
        </configuration>
        <executions>
            <execution>
                <goals>
                    <goal>deploy</goal>
                </goals>
                <configuration>
                    <app>target/${project.build.finalName}.war</app>
                    <contextRoot>/</contextRoot>
                </configuration>
            </execution>
        </executions>
    </plugin>
    ```

    The new version looks for an execution of the deploy goal and its associated configuration when running the goal 'run'. Either version will allow you to run the latest glassfish-embedded jar; you only need to add it as a plugin dependency:

    ```xml
    <plugin>
        [...]
        <dependencies>
            <dependency>
                <groupId>org.glassfish.main.extras</groupId>
                <artifactId>glassfish-embedded-all</artifactId>
                <version>4.0</version>
            </dependency>
        </dependencies>
    </plugin>
    ```

  • How to make a stack stable? Need help for an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver as described by Erin Catto (mentioned in the reference of the paper as [Catt05]).

    So here's how it currently works:

    - The simulation handles 2-dimensional rotating convex polygons.
    - Detection uses the separating-axis test, with a SKIN: the closest points between two polygons are detected, and a collision is registered if their distance is less than SKIN.
    - To resolve a collision, the simultaneous impulse-based method is used. It is solved with an iterative solver (PGS solver) as in Erin Catto's paper.
    - Error correction is implemented using Baumgarte stabilization (you can refer to either paper for this) via J V = (beta/dt) * overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time step taken by the engine, and overlap the overlap between the bodies (the true overlap, so SKIN is ignored).

    However, it is still less stable than I expected. I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them, they would swing! Also note that I am not looking for a sleeping scheme. But I would settle for any explicit scheme you might have to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have tried: using simultaneous position-based error correction as described in the paper in section 5.3.2. This turned out to be worse than the current scheme.

    If you want to know the parameters I used:

    - hexagons, side 50 (pixels)
    - gravity 2400 (pixels/sec^2)
    - time step 1/60 (sec)
    - beta 0.1
    - restitution 0 to 0.2
    - coefficient of friction 0.2
    - PGS iterations 10
    - initial separation 10 (pixels)
    - mass 1 (the unit is irrelevant for now; I modified velocity directly <- impulse method)
    - inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys!

    EDIT: In response to Cholesky's comment about warm starting the solver and Baumgarte: Oh right, I forgot to mention! I do save the contact history and the impulse determined in this time step, to be used as the initial guess in the next time step. As for the Baumgarte, here's what actually happens in the code. A collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If, at this moment, I used the PGS solver without Baumgarte, a restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! So this isn't right; I want to have the bodies touching each other. So I turn on the Baumgarte, where its role is actually to pull the bodies together! Weird, I know – a scheme intended to push bodies apart becomes useful for the reverse. Also, I found that if I increase the number of iterations to 100, stacks become much more stable, though the program becomes very slow.

    UPDATE: Since the stack swings left and right, could it be that something is wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0.
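    For reference, the stabilized solve being described can be sketched in the notation of [Catt05] for a single contact constraint; M is the mass matrix, C the overlap, and the exact signs depend on how the overlap is measured:

    ```latex
    b = \frac{\beta}{\Delta t}\, C, \qquad
    \lambda = -\frac{J V + b}{J M^{-1} J^{\mathsf{T}}}, \qquad
    V \leftarrow V + M^{-1} J^{\mathsf{T}} \lambda
    ```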

  • On a BPM Mission with Process Accelerators. Part 1: BPM as an ATV

    - by Cesare Rotundo
    Part 1: BPM as an ATV

    It's always exciting to talk to customers who are in the middle of a BPM transformational journey. Their thirst for new processes to improve with BPM makes them explorers in a landscape of opportunities. They have discovered that with BPM they can "go places" they couldn't reach before. In a way, learning how to generate value with BPM is like adopting a new means of transportation.

    Apps are like regular cars: very efficient, but to be used on paved roads. The road/process has been traced, and there are fixed paths to follow to get from "opportunity to quote" or from "quote to cash". Getting off the road is risky, and laying down new asphalt is slow and expensive. Custom development is like running: you can go virtually anywhere, following any path you like, yet it's slow, and a lot of sweat. BPM allows you to go off the beaten path laid out by packaged apps, yet make fast progress compared to custom development. BPM is therefore more like an all-terrain vehicle (ATV): less efficient than a car, but much faster than running, with an engine powerful enough to get you places.

    The similarities between BPM and ATVs don't stop here: you must learn to ride it even if you already know how to drive a car, and you can reach places but figuring out the path to your destination is harder. Ultimately, with BPM as with an ATV, you reach places that you thought you could never reach, and you discover new destinations that provide great benefit to you… and that you didn't even know existed! That's where the sense of accomplishment that we heard from our BPM customers comes from, as well as the desire to share their experience – or even, as in the case of a county, the willingness to contribute their BPM solutions to help other agencies that face the same challenges.

    The question we wanted to answer is how we can teach organizations to drive the ATV/BPM, thus leading them to deeper success with BPM, while increasing their awareness of the potential for reaching new targets, and finally equipping them with the right tools. Like with ATVs, getting from point A to point B is more of a work of art than cruising on the highway by car. There is a lot we can do: after all, many sought-after destinations are common, and someone else has been on the same path before. If only you could learn from their experience…

  • No Customer Left Behind

    - by Kathryn Perry
    A guest post by David Vap, Group Vice President, Oracle Applications Product Development

    What does customer experience mean to you? Is it a strategy for your executives? A new buzzword and marketing term? A bunch of CRM technology with social software added on? For me, customer experience is a customer-centric worldview that produces a deeper understanding of your business and what it takes to achieve sustainable, differentiated success. It requires you to prioritize and examine the journey your customers are on with your brand, so you can answer the question, "How can we drive greater value for our business by delivering a better customer experience?"

    Businesses that embrace a customer-centric worldview understand their business at a much deeper level than most. They know who their customers are, what their value is, what they do, what they say, what they want, and ultimately what that means to their business.

    "Why Isn't Everyone Doing It?"

    We're all consumers who have our own experiences with many brands. Good or bad, some of those experiences stay with us, so viscerally we understand the concept of customer experience from the stories we share. One that stands out in my mind happened as I was preparing to leave for a 12-month job assignment in Europe. I wanted to put my cable television subscription on hold. I wasn't leaving for another vendor. I wasn't upset. I just had a situation where it made sense to put my $180-per-month account on pause until I returned. Unfortunately, there was no way for this cable company to acknowledge that I was a loyal customer with a logical request – and to respond accordingly. So, ultimately, they lost my business. Research shows us that it costs six to seven times more to acquire a new customer than to retain an existing one. Heavily funding the effort of getting new customers while underfunding the effort of serving the needs of your existing ones (who are your greatest advocates) is a vicious and costly cycle.

    "Hey, These Guys Suck!"

    I love my Apple iPad because it's so easy to use. The explosion of these types of technologies, combined with new media channels, has raised our expectations and made us hyperaware of what's going on and what's available. In addition, social media has given us a megaphone to share experiences, both positive and negative, with greater impact. We are now an always-on culture that thrives on our ability to access, connect, and share anywhere, anytime. If we don't get the service, product, or value we expect, it is easy to tell many people about it. We also can quickly learn where else to get what we want. Consumers have the power of influence and choice at a global scale. The businesses that understand this principle are able to leverage that power to their advantage. The ones that don't suffer from it. Which camp are you in?

    Note: This is Part 1 in a three-part series. Stop back for Part 2 on November 19.

  • SEO/Google: How should I handle multiple countries and domains?

    - by Valorized
    Hello. I'm the webmaster of an online shop based in Austria (Europe), so we registered "example.at". We also own other domain names like "example-shop.com" and "example.info". Currently all those domains are redirected (301) to the .at one. Still available are "example.net" and "example.org" (and .ws/.cc); unfortunately not available are .de/.eu. The .com is currently owned by one of our partners; the contract ends in 2012, but until then we have no chance to get that one.

    Recently I read more about geo-targeting, and I noticed ONE big deal: the TLD ".at" is hardly recognized in Germany (google.de), whereas it is excellently listed in Austria (google.at). As a result of the .at, I cannot set the target location manually (or to unlisted). More info: https://www.google.com/support/webmasters/bin/answer.py?answer=62399&hl=en

    This is a big problem. I looked at Google Analytics and – although Germany is 10x as big as Austria – there are more visits from Austria. So, how should I configure the domains in order to get the best results in both Germany and Austria? I thought of some solutions:

    1. I could stop redirecting the .info. There would then be a duplicate of the .at site. Moreover, in Webmaster Tools, I could set the target location of the .info to Germany. As the .at still targets Austria, both would be targeted – however, I don't know if Google punishes one of them because of the duplicate content.
    2. Same as 1, but with .net or .org (I think .info is not a "nice" domain, and moreover I think search engines prefer .com, .net, or .org to .info).
    3. Same as 1 (or 2), but with a rel="canonical" on the new one, pointing to the .at. Con: I don't think this will improve the situation, because it still tells Google that the .at one is more important, as in: "if .info points to .at, the target may still be Austria".
    4. rel="canonical" on the .at, pointing to the new one (.info, .net, or .org). However, I fear that this will have a negative impact on the listing on google.at, because: "Hey, the well-known .at is not important anymore, so let's focus on the .info, which is not well known." Therefore: a bad position in search results.
    5. Redirect the .at to the new one (.info, .net, or .org) with a 301 redirect. Con: this might be worse than 4; we might lose PageRank (or "the value of the page", since Google says that PageRank is not important anymore). Moreover, this might be even more confusing for the customers. In 3 or 4, customers don't get redirected; they do not see the canonical meta tag.

    So, dear experts, please tell me what the best option would be! Thank you very much for your advice in advance, and please excuse the long question. I really appreciate this network!

    Please note: it's exactly the same content AND language. In Austria we speak German.
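    For reference, the canonical hint weighed in options 3–5 is a single tag in the <head> of each page of the duplicate site; a sketch using the placeholder domain from the question:

    ```html
    <link rel="canonical" href="http://www.example.at/" />
    ```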

  • tightvnc authentication failure

    - by broiyan
    When I run a TightVNC client to establish a VNC session, I sometimes receive an error message that suggests there are repeated failed VNC login attempts or a brute-force attack. The message dialog title is "unsupported security type" and the text content is "too many authentication failures, try another connection? yes/no". This problem goes away if I reboot the Ubuntu server, reload the VNC server program, and try again. From that point, it will work for multiple VNC sessions. My VNC sessions are typically about 20 minutes. At some point in the future the problem recurs, so it seems correlated to the time the server has been up or the time TightVNC has been loaded. Typically it takes only a day or so before the problem comes back.

    I am using TightVNC 1.3 on a server running Ubuntu 12.04. The version of vncserver is rather dated, because that seems to be all that is available from TightVNC for Linux servers. On the client side I use the newest Java-based VNC client (version 2.5) for both Windows access and Ubuntu access. All my VNC sessions are via SSH. I am the only user, and I will typically use only the same client computer. How can I stop this problem from recurring?

    Edit: I found the log file. This is a small excerpt of what I am seeing. Essentially, various IPs, not my own, are attempting to connect. What is the practical solution for this?

    ```
    05/06/12 20:07:32 Got connection from client 69.194.204.90
    05/06/12 20:07:32 Non-standard protocol version 3.4, using 3.3 instead
    05/06/12 20:07:32 Too many authentication failures - client rejected
    05/06/12 20:07:32 Client 69.194.204.90 gone
    05/06/12 20:07:32 Statistics:
    05/06/12 20:07:32   framebuffer updates 0, rectangles 0, bytes 0
    05/06/12 20:24:56 Got connection from client 79.161.16.40
    05/06/12 20:24:56 Non-standard protocol version 3.4, using 3.3 instead
    05/06/12 20:24:56 Too many authentication failures - client rejected
    05/06/12 20:24:56 Client 79.161.16.40 gone
    05/06/12 20:24:56 Statistics:
    05/06/12 20:24:56   framebuffer updates 0, rectangles 0, bytes 0
    05/06/12 20:29:27 Got connection from client 109.230.246.54
    05/06/12 20:29:27 Non-standard protocol version 3.4, using 3.3 instead
    05/06/12 20:29:28 rfbVncAuthProcessResponse: authentication failed from 109.230.246.54
    05/06/12 20:29:28 Client 109.230.246.54 gone
    05/06/12 20:29:28 Statistics:
    05/06/12 20:29:28   framebuffer updates 0, rectangles 0, bytes 0
    ```

    Read the article

  • How to run VisualSVN Server on port 443 running IIS on same server?

    - by Metro Smurf
Server 2008 R2 SP1, VisualSVN Server 2.1.6. The IIS server has about 10 sites. One of them uses https over port 443 with the following bindings:

http x.x.x.39:80 site.com
http x.x.x.39:80 www.site.com
https x.x.x.39:443

VisualSVN Server properties:

server name: svn.SomeSite.com
server port: 443
Server Binding: x.x.x.40

No sites on IIS are listening to x.x.x.40. When starting up VisualSVN Server, the following errors are thrown:

make_sock: could not bind to address x.x.x.40:443 (OS 10013) An attempt was made to access a socket in a way forbidden by its access permissions.
no listening sockets available, shutting down

When I stop Site.com on IIS, VisualSVN Server starts up without a problem. When I bind VisualSVN Server to port 8443 and start Site.com, VisualSVN Server starts without a problem. My goal is to be able to access the VisualSVN Server with a normal URL, i.e., one that doesn't use a port number in the address: https://svn.site.com vs https://svn.site.com:8443. What needs to be configured to allow VisualSVN Server to run on port 443 with IIS running on the same server?

Edit / Answer: The answer provided by Ivan did point me in the right direction. For anyone else running into this, here is a bit more information. Even though my IIS had no bindings set to the IP address I am using for VisualSVN, IIS will still take the IP address hostage unless it is explicitly told which IP addresses to listen to. There is no GUI in Win Server 2k8 to configure which IP addresses IIS listens on; by default, IIS listens to all IP addresses assigned to the server. The following will configure IIS to listen only to the IP addresses you want:

1. Open a command prompt.
2. Enter: netsh
3. Enter: http
4. Enter: show iplisten -- this will show a table of the IP addresses IIS is listening to. By default, the table will be empty (I guess this means IIS listens to all IPs).
5. For each IP address IIS should listen to, enter: add iplisten ipaddress=x.x.x.x
6. Enter: show iplisten -- you should now see all the IP addresses added to the listening table.
7. Exit and then reset IIS.

Each of these commands can also be run directly, i.e., netsh http show iplisten. If you need to delete an IP address from the listening table:

1. Open a command prompt.
2. Enter: netsh
3. Enter: http
4. Enter: delete iplisten ipaddress=x.x.x.x
5. Exit and then reset IIS.
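The same configuration can be run as one-line commands from an elevated prompt; this sketch uses x.x.x.39 from the question's bindings, leaving x.x.x.40 free for VisualSVN Server:

rem Tell http.sys/IIS to listen on the IIS address only, then verify:
netsh http add iplisten ipaddress=x.x.x.39
netsh http show iplisten
rem Restart IIS so the listening table takes effect:
iisreset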

    Read the article

  • Power cycles on/off 3 times before booting properly from cold start, no other issues (New System)

    - by James
Relevant specs: Sapphire 5850, Core i7 920, Seasonic X750 power supply, ECS X58B-A2 mobo.

From a cold boot, meaning all power totally disconnected at the wall, the system will power on for less than a second and then power off completely. After two seconds of being powered off this will repeat, and on the third "attempt" the computer will boot. To be very specific, here is what happens:

1. The power is turned on at the wall and on the PSU; the orange stdby LED on the mobo is illuminated, but the system is 'off'.
2. I hit the power button on the case or on the mobo itself.
3. I hear the relay (?) in the PSU closing.
4. The case light comes on and the mobo power light comes on. The fans start rotating.
5. Immediately after this I hear some relay click - the power lights extinguish, the fans stop, the stdby light remains on.
6. Less than 2 seconds pass and the cycle repeats without any intervention from me. On the third attempt it boots normally and the machine runs perfectly.

If I do a soft reboot or a full shutdown, the computer starts normally the next time. It's only if I pull the power cord or flick the switch off on the PSU that I get the cycling again - basically any time the stdby light on the mobo goes out.

What I have tried:

- I have removed the graphics card and I get the same problem.
- I have removed the PSU, hotwired it to the ON position and verified voltages on all lines. The relay does not cycle when I do this.
- If I connect only the 24-pin ATX connector to the mobo and not the 8-pin ATX12V / CPU connector, I do not get the cycling; the fans run and the power light stays on, but obviously the system can't boot.
- Disconnecting all fans has no effect on the problem.

My feeling is that it's something to do with the motherboard, like a capacitor that's taking a long time to charge because it's leaking, or something along those lines. But I can't imagine what could be 'wrong' with it and only manifest itself as a problem under these very specific circumstances. Any ideas? Thanks.

    Read the article

  • OSS4 on Debian Squeeze

    - by mit
Hi, I am trying to get OSS4 to work on a Debian Squeeze 64-bit machine with a USB sound adapter. There is no sound from this adapter at present, although it worked just before on the previous installation. You can see the output of some commands here:

$ sudo /etc/init.d/oss4-base restart
Stopping Open Sound System: SNDCTL_MIX_EXTINFO: No such device or address
done.
Starting Open Sound System: done (OSS is already loaded).
$ sudo /etc/init.d/oss4-base stop
Stopping Open Sound System: SNDCTL_MIX_EXTINFO: No such device or address
done.
$ sudo /etc/init.d/oss4-base start
Starting Open Sound System: done (OSS is already loaded).

$ ossinfo
Version info: OSS 4.2 (b 2002/201001250441) (0x00040100) GPL
Platform: Linux/x86_64 2.6.32-5-xen-amd64 #1 SMP Fri Dec 10 17:41:50 UTC 2010 (pc11)
Number of audio devices: 2
Number of audio engines: 2
Number of MIDI devices: 0
Number of mixer devices: 2

Device objects
0: osscore0 OSS core services
1: oss_hdaudio0 ATI HD Audio interrupts=0 (613)
   HD Audio controller ATI HD Audio
   Vendor ID 0x10024383 Subvendor ID 0x10192816
   Codec 0: Not present
2: oss_usb0 USB audio core services
3: usb0d8c0126-0 USB sound device
4: usb0d8c0126-1 USB sound device
5: usb0d8c0126-2 USB sound device
6: usb0d8c0126-3 USB sound device

MIDI devices (/dev/midi*)

Mixer devices
0: (USB sound device ) (Mixer 0 of device object 3)
1: USB sound device (Mixer 0 of device object 5)

Audio devices
(USB sound device play /dev/oss/usb0d8c0126-1/pcm0 ) (device index 0)
USB sound device play /dev/oss/usb0d8c0126-3/pcm0 (device index 1)

Nodes
/dev/dsp -> /dev/oss/usb0d8c0126-1/pcm0
/dev/dsp_out -> /dev/oss/usb0d8c0126-1/pcm0
/dev/dsp_mmap -> /dev/oss/usb0d8c0126-1/pcm0

$ osstest
Sound subsystem and version: OSS 4.2 (b 2002/201001250441) (0x00040100)
Platform: Linux/x86_64 2.6.32-5-xen-amd64 #1 SMP Fri Dec 10 17:41:50 UTC 2010

*** Scanning sound adapter #-1 ***
/dev/oss/usb0d8c0126-1/pcm0 (audio engine 0): USB sound device play
- Device not present - Skipping

*** Scanning sound adapter #-1 ***
/dev/oss/usb0d8c0126-3/pcm0 (audio engine 1): USB sound device play
- Performing audio playback test...
/dev/oss/usb0d8c0126-3/pcm0: No such file or directory
Can't open the device

*** Some errors were detected during the tests ***

$ ossxmix
/dev/oss/usb0d8c0126-2/mix0: No such file or directory
No mixers could be opened

$ ossmix
SNDCTL_MIX_EXTINFO: No such device or address
ad@pc11:~$ man ossmix
ad@pc11:~$ ossmix -a
SNDCTL_MIX_EXTINFO: No such device or address
ad@pc11:~$ man ossmix
ad@pc11:~$ ossmix -D
SNDCTL_MIX_EXTINFO: No such device or address
ad@pc11:~$ ossmix -D 0
SNDCTL_MIX_EXTINFO: No such device or address
ad@pc11:~$ man ossmix
ad@pc11:~$ ossxmix
/dev/oss/usb0d8c0126-2/mix0: No such file or directory
No mixers could be opened

How can I make OSS sound work? I can add more information if necessary.
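The osstest and ossxmix failures all point at /dev/oss nodes that no longer match the adapter's current device numbering, so one hedged recovery sketch is to re-run OSS4's detection and node-creation utilities (ossdetect, ossdevlinks, soundon and soundoff all ship with OSS4); whether this resolves this particular USB renumbering is an assumption:

$ sudo soundoff
$ sudo ossdetect -v
$ sudo ossdevlinks -v
$ sudo soundon
# soundoff/soundon unload and reload the OSS drivers; ossdetect rescans
# the hardware and ossdevlinks recreates the /dev/dsp* symlinks so they
# point at the pcm device that actually exists.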

    Read the article

  • `svn checkout` on the SVN server causes the repo to break with a 301 error

    - by Phillip Oldham
We have an nginx server which proxies to a standard set-up of Apache+SVN. The nginx set-up is a very simple proxy:

server {
  server_name svn.ourdomain.tld;
  location / {
    proxy_pass http://localhost:8080;
  }
}

Apache is set up as follows:

<Location />
  DAV svn
  SVNParentPath /var/svn
  AuthType Basic
  AuthName "Authentication Required"
  AuthUserFile /var/svn/.auth
  Require valid-user
</Location>

...which allows us to access repositories using something like http://svn.ourdomain.tld/repo. We've been running this set-up now for about 2 years without issue.

Recently we've found that we need to check out one of the repositories onto the server itself; however, whenever we do so it seems to break the repo. From that point on, it will only respond with a 301 Moved Permanently error. We've tried:

svn co file:///path/to/repo
svn co svn://localhost/repo
svn co svn://svn.ourdomain.tld/repo
svn co svn+ssh://localhost/repo
svn co svn+ssh://svn.ourdomain.tld/repo
svn co http://localhost/repo
svn co http://svn.ourdomain.tld/repo

We also tried bypassing nginx, and got the same error:

svn co http://localhost:8080/repo
svn co http://svn.ourdomain.tld:8080/repo

Checking out from a different machine works as expected until we attempt to check out on the server; after that it refuses with the same 301 error. What is more confusing is that this repository server also hosts our Hudson CI server, which pulls and builds our projects hourly. This leads us to suspect that it's the svn client which is causing an error in communication. It's also very confusing that removing then re-creating the repo using svnadmin doesn't reset the error - the repo is still unavailable even though it's "new"! Restarting Apache and Subversion (svnserve) has no effect on this, or on the original error.

Version information:

OS: 64-bit CentOS 4.2, 2.6.27 kernel
svn client: 1.4.2 (same for both server and remote clients)
svn server: 1.4.2
httpd: 2.2.3

UPDATE: This also happens with svn export when run on the repo server. Run from any other box/client, there isn't a problem. Here's the workflow, to help clarify the error:

[~repo-server~]# svnadmin create {repo}; chown -Rf www:www {repo}
[remote-client]# svn checkout http://svn.ourdomain.tld/repo
[remote-client]# svn add file; svn ci -m ''
[~repo-server~]# cd /var/www; svn export file:///path/to/repo/trunk ourproject
[remote-client]# svn update -- fails with 301 error

I can also confirm that the hostname of the box doesn't have an effect here, which is very odd: whether or not svn.ourdomain.tld is added to /etc/hosts, it still breaks - we thought it could be an issue with localhost routing, but that doesn't seem to be the case. Are we missing something in the documentation which states you can't check out a repo when the server is on the same box? How can we stop the repos becoming corrupt when we check out locally?
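One hedged way to narrow down which layer is issuing the 301 is to compare the raw responses through and around the proxy; the hostnames below are the question's own placeholders:

# Request headers via nginx, then straight to Apache:
curl -I http://svn.ourdomain.tld/repo
curl -I http://localhost:8080/repo
# If only the proxied request returns 301, a missing
# "proxy_set_header Host $host;" line in the nginx location block is a
# common suspect (mod_dav_svn builds URLs from the Host header) - an
# assumption to verify, not a confirmed diagnosis for this setup.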

    Read the article

  • Migrate from MySQL to PostgreSQL on Linux (Kubuntu)

    - by Dave Jarvis
Storyline

Trying to migrate a database from MySQL to PostgreSQL. All the documentation I have read covers, in great detail, how to migrate the structure; I have found very little documentation on migrating the data. The schema has 13 tables (which have been migrated successfully) and 9 GB of data. MySQL version: 5.1.x. PostgreSQL version: 8.4.x. I want to use the R programming language to analyze the data using SQL select statements; PostgreSQL has PL/R, but MySQL has nothing (as far as I can tell).

A long time ago in a galaxy far, far away...

Create the database location (/var has insufficient space; I also dislike having the PostgreSQL version number everywhere -- upgrading would break scripts!):

sudo mkdir -p /home/postgres/main
sudo cp -Rp /var/lib/postgresql/8.4/main /home/postgres
sudo chown -R postgres.postgres /home/postgres
sudo chmod -R 700 /home/postgres
sudo usermod -d /home/postgres/ postgres

All good to here. Next, restart the server and configure the database using these installation instructions:

sudo apt-get install postgresql pgadmin3
sudo /etc/init.d/postgresql-8.4 stop
sudo vi /etc/postgresql/8.4/main/postgresql.conf  (change data_directory to /home/postgres/main)
sudo /etc/init.d/postgresql-8.4 start
sudo -u postgres psql postgres
\password postgres
sudo -u postgres createdb climate
pgadmin3

Use pgadmin3 to configure the database and create a schema.

A New Hope

The episode began in a remote shell known as bash, with both databases running, and the installation of a command with a most unusual logo: SQL Fairy.

perl Makefile.PL
sudo make install
sudo apt-get install perl-doc  (strangely, it is not called perldoc)
perldoc SQL::Translator::Manual

Extract a PostgreSQL-friendly DDL and all the MySQL data:

sqlt -f DBI --dsn dbi:mysql:climate --db-user user --db-password password -t PostgreSQL > climate-pg-ddl.sql
mysqldump --skip-add-locks --complete-insert --no-create-db --no-create-info --quick --result-file="climate-my.sql" --databases climate --skip-comments -u root -p

The Database Strikes Back

Recreate the structure in PostgreSQL as follows:

1. pgadmin3 (switch to it)
2. Click the Execute arbitrary SQL queries icon
3. Open climate-pg-ddl.sql
4. Search for TABLE " and replace with TABLE climate." (insert the schema name climate)
5. Search for on " and replace with on climate." (insert the schema name climate)
6. Press F5 to execute

This results in: Query returned successfully with no result in 122 ms.

Replies of the Jedi

At this point I am stumped.

1. Where do I go from here (what are the steps) to convert climate-my.sql to climate-pg.sql so that it can be executed against PostgreSQL?
2. How do I make sure the indexes are copied over correctly (to maintain referential integrity; I don't have constraints at the moment to ease the transition)?
3. How do I ensure that adding new rows in PostgreSQL will start enumerating from the index of the last row inserted (and not conflict with an existing primary key from the sequence)?

Resources

A fair bit of information was needed to get this far:

https://help.ubuntu.com/community/PostgreSQL
http://articles.sitepoint.com/article/site-mysql-postgresql-1
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
http://pgfoundry.org/frs/shownotes.php?release_id=810
http://sqlfairy.sourceforge.net/

Thank you!
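For the third question, a hedged sketch: once the data is loaded, each serial column's sequence can be bumped past the imported maximum with setval. The table and column names below ('climate.station', 'id') are placeholders, not names from the actual schema:

psql -d climate -c "SELECT setval(pg_get_serial_sequence('climate.station', 'id'), (SELECT MAX(id) FROM climate.station));"
# pg_get_serial_sequence resolves the sequence backing a serial column,
# so after setval new inserts continue numbering after the imported rows.
# Repeat once per table that has a serial primary key.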

    Read the article

  • Windows XP SP3 - Library not registered error... IE8 not installing.

    - by Wesley
Specs to put things in context: AMD Athlon XP 2400+ @ 1987 MHz / 2 x 512MB PC3200 DDR RAM / WD 160GB IDE HDD / 3DFuzion 128MB GeForce 6200 AGP 4x / FIC AM37 / Windows XP SP3

Just recently, I was unable to start Windows Media Player. I clicked it and the busy cursor came up, but then nothing happened. Also, I tried doing a search for a file, and the same thing: it would show the busy cursor, then suddenly stop doing anything. I couldn't find it in the Processes of the Task Manager. (Perhaps I don't know what I'm looking for?)

Also, I was trying to update my DirectX, which has been running something older than DirectX 9.0c for a while now, except the installation fails due to an "internal error". I think the failed DirectX installation has been like that for a while... (I remember trying to install DirectX 9.0c a while back, but it still failed.) The Windows programs not starting, I don't think I've ever had before... what could be the cause of these problems?! Thanks in advance. =)

EDIT 1: The weird thing now is that when I try to open User Accounts, I get a message saying "Wrong number of arguments or invalid property assignment." Also, when I'm trying to open services.msc, I'm getting a script error that says "Library not registered." (Code: 0, URL: res://C:\WINDOWS\System32\mmcndmgr.dll/views.htm) Perhaps this is related to my other question, where I seemed to have an unregistered library of some sort.

EDIT 2: The DirectX End-User Runtime Web Installer freakishly worked and updated successfully. Now I have focused the question on the bigger problem. For WMP11, H&S, and Search, I click it once, get a busy icon for a second, and then nothing happens. Refer to EDIT 1 for other problems.

EDIT 3: It seems that my problems may be related to some Internet Explorer script errors. So what I did was download the IE8 installer from the Microsoft website, but when I run it and get to the main portion of the installer, it just keeps looping on the Downloading step of the installation. The installer is still running, but I left it for at least 4 hours and the downloading step was still not finished. What is the problem? Also, I uninstalled Ubuntu 9.10 after these problems, but they still remain.

EDIT 4: I'm getting an Active Desktop Recovery background on startup now, and within seconds my computer hangs again. EDIT 3 explains the main issue, though.
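The "Library not registered" error naming mmcndmgr.dll, together with the script failures, points at broken COM/script registrations, so one hedged repair sketch is to re-register the libraries involved from Start > Run > cmd. Whether these three cover every affected DLL on this machine is an assumption:

rem Re-register the Windows scripting engines and the MMC library
rem named in the error, then reboot as a precaution:
regsvr32 jscript.dll
regsvr32 vbscript.dll
regsvr32 mmcndmgr.dll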

    Read the article
