Search Results

Search found 8450 results on 338 pages for 'mail gateway'.


  • disks not ready in array causes mdadm to force initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot only three of the 10 devices in the raid are available, the others become available later in the boot process. Currently, unless I go through initramfs I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs the system boots fine and my array is mounted exactly where I intend it to. Here are the pertinent files as near as I can tell. Ask me if you want to see anything else.

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers
    # auto-create devices with Debian standard permissions
    # CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
    # definitions of existing MD arrays
    # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700
    # by mkconf $Id$
    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae

    Here is fstab:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdc2 during installation
    UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdc3 during installation
    UUID=c4988662-67f3-4069-a16e-db740e054727 none swap sw 0 0
    # mount large raid device on /data
    /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0

    Output of cat /proc/mdstat:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1]
    23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]
    unused devices: <none>

    Here is the output of mdadm --detail --scan --verbose:

    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd

    Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online. edit: changed title to properly reflect situation

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or were those the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^) Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone who has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be for everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!". But multiply that by 8 platforms, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic: the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads' - well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past - not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their finger nails at night. ;\^) Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurance. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer, well maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think about it, I may be one from time to time. :\^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often.. Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: Mission accomplished successfully. And it was fun! Using the resources listed in my previous article about Learning Content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight. Local Prometric testing centre Back in November I chose a local testing centre which was the easiest to access from my office despite the horrible traffic you might experience here on the island. Actually, it was not the closest one. But due to their website, their awards as Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my priority. Boy, how should I regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easier part but unfortunately due to heavier traffic than usual I arrived only 20 minutes before time. Not too bad but more to come. The building called 'le Hub' is nicely renovated and provides the right environment for an IT group of companies like FRCI. I think they have currently 5 independent IT departments over there. Even the handling at the reception was straight forward, welcoming and at my ease. But then... first shock: "We don't have any exam registration for today." - Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled and the lady went off again to check their records. Next shock: A couple of minutes later, another guy tries to explain me that "the staff of the testing centre is already on vacation and the centre is officially closed." - Are you kidding me? Here's the official confirmation by Prometric, and I don't find it funny that I take a day off today only to hear this kind of blubbering nonsense. I thought that I'll be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally come back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I am allowed to sit down and take the exam. Exam details Well, you know the rules. Signing an NDA doesn't allow me to provide you any details about the questions or topics that have been covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-) The result "Congratulations! You have passed this Microsoft Certification exam." - In general, I have to admit that the parts on HTML5 and CSS3 were the easiest after all, and that I have to get myself a little bit more familiar with certain Javascript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details? Next goal Of course, the journey to Microsoft Certifications continues and my next goal is to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the certification of MCSD: Windows Store Apps using HTML5. I guess, during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • [EF + ORACLE] Updating and Deleting Entities

    - by JTorrecilla
    Prologue In previous chapters we have seen how to insert data through EF, with and without sequences. In this one, we are going to see how to update and delete data in the DB. Updating data Updating an Entity's data (its properties) is a very common and easy action. Before changing any of the properties of the Entity, we can check the EntityState property and see that it is EntityState.Unchanged. To make an update we first need to get the Entity that will be modified. In the following example, I use GetEmployeeByNumber to get a valid Entity: EMPLEADOS emp=GetEmployeeByNumber(2); emp.Name="a"; emp.Phone="2"; emp.Mail="aa"; After modifying the desired properties of the Entity, we can check the EntityState property again, which now has the value EntityState.Modified. To persist the changes to the DB it is necessary to invoke the SaveChanges function of our context: context.SaveChanges(); If we check the EntityState property once more we will see that the value is back to EntityState.Unchanged. Deleting Data Another easy action is to delete an Entity. The first step to delete an Entity from the DB is to select the entity: CLIENTES selectedClient = GetClientByNumber(15); context.CLIENTES.DeleteObject(selectedClient); Before invoking the DeleteObject function, we can check EntityState, whose value must be EntityState.Unchanged. After deleting the object, the state changes to EntityState.Deleted. To commit the action we have to invoke the SaveChanges function. After that, the EntityState property will be EntityState.Detached. Cascade Entity Framework supports cascading updates and deletes, although I have never seen cascading updates used. What is a cascade delete? A cascade delete is an action that deletes all the objects related to the object we want to delete. This option can be established in the DB manager, or in the EF model designer. For example: take a given relation (1-N) between clients and requests. The common situation would be to only allow deleting those clients that have no requests. If we select the relation between both entities and right-click, we can see the properties panel of the relation. The properties grid shows the relation, indicating the master table (Clients) and the end point (Cabecera, or Requests). The property "End 1 OnDelete" indicates the action to take when an Entity from the master is deleted. There are two options: - None: no action is taken; that is, if an Entity has detail entities it cannot be deleted. - Cascade: all entities related to the master Entity will be deleted. If we enable cascade delete on a relation and invoke the DeleteObject function of the set, we can observe that all the related objects show an EntityState.Deleted state. As with an update, insert or ordinary delete, the data is not committed until we commit the changes with the SaveChanges function.
    Finally In this chapter we have seen how to update an Entity, how to delete an Entity, and how to implement cascade deletes through EF. In the next chapters we will see how to query the data in the DB.
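    To make the flow above concrete, here is a minimal C# sketch of the update and delete steps with the EntityState checks written out. It assumes the default EntityObject code generation (so each entity exposes an EntityState property) and reuses the context, entity and helper names from the examples above as illustrative assumptions; it is a sketch, not the article's exact code.

        // Sketch only: context and entity names are the illustrative ones used above.
        // Requires System.Data (EntityState) and System.Diagnostics (Debug).
        using (ModelContext context = new ModelContext())
        {
            // Update: load, modify, save.
            EMPLEADOS emp = GetEmployeeByNumber(2);
            Debug.Assert(emp.EntityState == EntityState.Unchanged);
            emp.Name = "a";
            emp.Phone = "2";
            emp.Mail = "aa";
            Debug.Assert(emp.EntityState == EntityState.Modified);
            context.SaveChanges();                                   // persists the UPDATE
            Debug.Assert(emp.EntityState == EntityState.Unchanged);

            // Delete: mark for deletion, save.
            CLIENTES selectedClient = GetClientByNumber(15);
            context.CLIENTES.DeleteObject(selectedClient);
            Debug.Assert(selectedClient.EntityState == EntityState.Deleted);
            context.SaveChanges();                                   // issues the DELETE
            Debug.Assert(selectedClient.EntityState == EntityState.Detached);
        }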

    Read the article

  • Customer owes me half my payment. Should I take ownership of his AWS account for charging? How?

    - by Cawas
    Background They paid me my first half (back on April 15th) before we could even get into an agreement. Very nice of him! Then I finished the 2-week job of setting up the servers, using the AWS credentials he had just bought. I waited another 2 weeks for everything to settle, and it was all running fine. He did what he needed with his sftp account, everyone was happy. Now, it has been almost 2 months since I finished the job and I still didn't get the 2nd half. I must admit, it's not much money (about U$400, converted), but it would help me pay the bills at least. Heck, the Amazon bills they are paying are a little less than that (for now). Measures I'm wondering how I can go about charging him now. First thought, of course, would be taking everything down and saying "pay now, or be doomed". If that's not good enough, then I've lost it. I have no contracts and I doubt I could get a lawsuit going in this country for such a low value based only on emails. And I don't really want to get too aggressive here - there might be a business chance in the future and I don't want to ruin it. Second thought would be just changing the password. But then he probably could gain access again by some recovery means. That's mainly where my question lies. How can I do it without leaving any room for recovery on his side? I even got the first AWS "your account was created" mail from him, showing me I could begin my job back then. Lastly, do you have any other ideas on what I can and should do in this case? Responding to Answers Please, consider reading the current answers and comments. This is not a very simple case. I've considered many, many options (including all lawful ones) before considering the ones I've listed here, and I am willing to take the loss and all that. That's not the point. The point is being practical here. I will call him again and talk about it. I will make plenty of noise about getting lawyers and getting a contract. I am ready to go all the way while I have time and energy for it. But, in practice, there is this extra thing I can do to assure myself of the work I've done. I can basically take it back and delete everything! I'd only take his password because I can find no other way to do it within Amazon. Maybe contacting Amazon and explaining the situation? I don't know. Give me ideas on the technical side! And thanks everyone for the attention and for helping me clarify the issue so far! :)

    Read the article

  • About the K computer

    - by nospam(at)example.com (Joerg Moellenkamp)
    Okay - after getting yet another mail because of the new #1 on the Top500 list, I want to add some comments from my side: Yes, the system is using SPARC processors. And that is great news for a SPARC fan like me. It is using the SPARC VIIIfx processor from Fujitsu clocked at 2 GHz. No, it isn't the only one. Most people are saying there are two systems in the Top500 list using SPARC (#77 JAXA and #1 K) but in fact there are three. The Tianhe-1 (#2 on the Top500 list) supercomputer contains 2048 Galaxy "FT-1000" 1 GHz 8-core processors. Don't know it? The FeiTeng-1000 - this proc is an 8-core, 8-threads-per-core, 1 GHz processor made in China. And it's SPARC based. By the way, this sounds really familiar to me - perhaps the people just took the open-sourced UltraSPARC T2 design, because some of the parameters sound just too similar. However it looks like Tianhe-1 is using the SPARCs as input nodes and not as compute nodes. No, I don't see it as the next M-series processor. Simple reason: you can't create SMP systems out of it - it simply doesn't have the functionality to do so. Even when there are multiple CPUs on a single board, they are not connected like an SMP/NUMA machine into a shared-memory machine - they are connected with the cluster interconnect (in this case the Tofu interconnect) and work like a large cluster. Yes, it has a lot of oomph in Linpack - however I assume a lot came from the extensions to the SPARCv9 standard. No, Linpack has no relevance for any commercial workload - Linpack is such a special load that even some HPC people argue it isn't really a good benchmark for HPC. It's embarrassingly parallel, and it can work with relatively small interconnects compared to the interconnects in SMP systems (however, we are now getting into the spheres where SMP interconnects were just a few years ago). Amdahl isn't hitting that hard when running Linpack. Yes, it's a good move to use SPARC. At some time in the last 10 years, there was an interesting twist in perception: SPARC was considered a proprietary architecture and x86 was the open architecture. However it's vice versa - try to create an x86 clone and you have a lot of intellectual property problems; create a SPARC clone and you have to spend 100 bucks or so to get the specification from the SPARC Foundation and develop your own SPARC processor. Fujitsu has been doing this for a long time now. So they have their own processor, their own know-how. So why was SPARC a good choice? Well, essentially Fujitsu can do what they want with their core as it is their core - for example adding the extensions to the SPARCv9 instruction set - whereas getting Intel to create extensions to x86 to help you with your product is a little bit harder. So Fujitsu could do what they needed to do with their processor in order to create such a supercomputer. No, the K is really using no FPGAs or GPUs as accelerators. The K is really doing this job on the CPU. Yes, it has a significantly enhanced FPU capable of executing 8 instructions in parallel. No, it doesn't run Solaris. Yes, it uses Linux. No, it doesn't hurt me ... as my colleague Roland Rambau (he knows a lot about HPC) once said to me ... it doesn't matter which OS is staying out of the way of the workload in HPC.

    Read the article

  • At what point does "constructive" criticism of your code become unhelpful?

    - by user15859
    I recently started as a junior developer. As well as being one of the least experienced people on the team, I'm also a woman, which comes with all sorts of its own challenges working in a male-dominated environment. I've been having problems lately because I feel like I am getting too much unwarranted pedantic criticism on my work. Let me give you an example of what happened recently. Team lead was too busy to push in some branches I made, so he didn't get to them until the weekend. I checked my mail, not really meaning to do any work, and found that my two branches had been rejected on the basis of variable names, making error messages more descriptive, and moving some values to the config file. I don't feel that rejecting my branch on this basis is useful. Lots of people were working over the weekend, and I had never said that I would be working. Effectively, some people were probably blocked because I didn't have time to make the changes and resubmit. We are working on a project that is very time-sensitive, and it seems to me that it's not helpful to outright reject code based on things that are transparent to the client. I may be wrong, but it seems like these kinds of things should be handled in patch type commits when I have time. Now, I can see that in some environments, this would be the norm. However, the criticism doesn't seem equally distributed, which is what leads to my next problem. The basis of most of these problems was due to the fact that I was in a codebase that someone else had written and was trying to be minimally invasive. I was mimicking the variable names used elsewhere in the file. When I stated this, I was bluntly told, "Don't mimic others, just do what's right." This is perhaps the least useful thing I could have been told. If the code that is already checked in is unacceptable, how am I supposed to tell what is right and what is wrong? If the basis of the confusion was coming from the underlying code, I don't think it's my responsibility to spend hours refactoring a whole file that someone else wrote (and works perfectly well), potentially introducing new bugs etc. I'm feeling really singled out and frustrated in this situation. I've gotten a lot better about following the standards that are expected, and I feel frustrated that, for example, when I refactor a piece of code to ADD error checking that was previously missing, I'm only told that I didn't make the errors verbose enough (and the branch was rejected on this basis). What if I had never added it to begin with? How did it get into the code to begin with if it was so wrong? This is why I feel so singled out: I constantly run into this existing problematic code, that I either mimic or refactor. When I mimic it, it's "wrong", and if I refactor it, I'm chided for not doing enough (and if I go all the way, introducing bugs, etc). Again, if this is such a problem, I don't understand how any code gets into the codebase, and why it becomes my responsibility when it was written by someone else, who apparently didn't have their code reviewed. Anyway, how do I deal with this? Please remember that I said at the top that I'm a woman, and I'm sure these guys don't usually have to worry about decorum when they're reviewing other guys' code, but honestly that doesn't work for me, and it's causing me to be less productive. I'm worried that if I talk to my manager about it, he'll think I can't handled the environment, etc.

    Read the article

  • UK OUG Conference Highlights and Insights

    - by Richard Bingham
    As per my preemptive post, this was the first time the annual conference organized by the UK Oracle User Group (UKOUG) was split into two events, one for Oracle Applications and another in December for Oracle Technology. Apps13, as it was branded, was hailed as a success, with over 1000 registered attendees and three days of sessions, exhibition, round-tables and many other types of content. As this poster on their stand illustrates, the UKOUG is a strong community with popular participants from both big and small Oracle partners and customers. The venue was a more intimate setting than previous years also, allowing everyone to casually bump into those they hoped to. It gave a real feeling of an Apps Community. The main themes over the days where CRM and Customer Experience, HCM, and FIN/SCM. This allowed people to attend just one focused day if they wanted. In addition the Apps Transformation stream ran across all three days, offering insights, advice, and details on the newer product solutions like Fusion Applications.  Here are some of the key take-aways I got from the conference, specific to my role in Fusion Applications Developer Relations: User Experience continues to be a significant reason for adopting some of the newer application products available, with immediately obvious gains in user productivity and satisfaction reported by customers. Also this doesn't stop with the baked-in UX either, with their Design Patterns proving popular and indeed currently being extended to including things like extending on ADF mobile and customizing the Simplified UI. More on this to come from us soon. The executive sessions emphasized the "it's a journey" phrase, illustrating that modern business applications are powered by technologies such as Cloud, Mobile, Social and Big Data and these can be harnessed to help propel your organization forward. Indeed the emphasis is away from the traditional vendor prescribed linear applications road map, and towards plotting a course based on business priorities supported by a broad range of integrated solutions. To help with this several conference sessions demoed the new "Applications Navigator" tool, developed in partnership with OUG members, which offers a visual framework to help organizations plan their Oracle Applications investments around business and technology imperatives. Initial reaction was positive, especially as customers do not need to decipher Oracle's huge product catalog and embeds the best blend of proven and integrated applications solutions. We'll share more on this when it is generally available. Several sessions focused around explanations and interpretation of Oracle OpenWorld 2013, helping highlight the key Oracle Applications messages and directions. With a relative small percentage of conference attendees also at OpenWorld (from a show of hands) this was a popular way to distill the information available down into specific items of interest for the community. Please note the original OpenWorld 2013 content is still available for download but will not remain available forever (via the Oracle website OpenWorld Content Catalog > pick a session > see the PDF download). With the release of E-Business Suite 12.2 the move to develop and deploy on the Fusion Middleware stack becomes a reality for many Oracle Applications customers. 
This coupled with recent E-Business Suite features such as the Integrated SOA Gateway and the E-Business Suite SDK for Java, illustrates how the gap between the technologies and techniques involved in extending E-Business Suite and Fusion Applications is quickly narrowing. We'll see this merging continue to evolve going forwards. Getting started with Oracle Cloud Applications is actually easier than many customers expected, with a broad selection of both large and medium sized organizations explaining how they added new features to their existing Oracle Applications portfolios. New functionality available from Fusion HCM and CX are popular extensions that do not have to disrupt those core business services. Coexistence is the buzzword here, and the available integration is also simpler than many expected, commonly involving an initial setup data load, then regularly incremental synchronizations, often without a need for real-time constant communication between systems. With much of this pre-built already the implementation process is also quite rapid. With most people dressed in suits, we wanted to get the conversations going without the traditional english reserve, so we decided to make ourselves a bit more obvious, as the photo below shows. This seemed to be quite successful and helped those interested identify and approach us. Keep a look out for similar again. In fact if you're in the UK there is an "Apps Transformation Day" planned by the UKOUG for the 19th March 2014, with more details to follow. Again something we'll be sure to participate in. I am hoping to attend the next half of the UKOUG annual conference, Tech13, that focuses more on Oracle technology and where there is more likely to be larger attendance of those interested in the lower-level aspects of applications customization and development. If you're going, let me know and maybe we can meet up.

    Read the article

  • Process for Securing Web Sites and Applications

    - by Aamir Hasan
    The following quick-start guide provides a detailed overview of how to configure security for IIS 6.0.

    Reduce the Attack Surface of the Web Server
    1. Enable only essential Windows Server 2003 components and services.
    2. Enable only essential IIS 6.0 components and services.
    3. Enable only essential Web service extensions.
    4. Enable only essential Multipurpose Internet Mail Extensions (MIME) types.
    5. Configure Windows Server 2003 security settings.

    Prevent Unauthorized Access to Web Sites and Applications
    1. Store content on a dedicated disk volume.
    2. Set IIS Web site permissions.
    3. Set IP address and domain name restrictions.
    4. Set the NTFS file system permissions.

    Isolate Web Sites and Applications
    1. Evaluate the effects of impersonation on application compatibility:
       - Identify the impersonation behavior for ASP applications.
       - Select the impersonation behavior for ASP.NET applications.
    2. Configure Web sites and applications for isolation.

    Configure User Authentication
    1. Configure Web site authentication:
       - Select the Web site authentication method.
       - Configure the Web site authentication method.
    2. Configure File Transfer Protocol (FTP) site authentication.

    Encrypt Confidential Data Exchanged with Clients
    1. Use Secure Sockets Layer (SSL) to encrypt confidential data.
    2. Use Internet Protocol security (IPSec) or virtual private network (VPN) with remote administration.

    Maintain Web Site and Application Security
    1. Obtain and apply current security patches.
    2. Enable Windows Server 2003 security logs.
    3. Enable file access auditing for Web site content.
    4. Configure IIS logs.
    5. Review security policies, processes, and procedures.

    Note: To secure the Web sites and applications in a Web farm, use the process described in this chapter to configure security for each server in the Web farm.

    Link: http://www.studentacad.com/post/2010/04/28/Process-for-Securing-Web-Sites-and-Applications.aspx

    Read the article

  • Navigation in a #WP7 application with MVVM Light

    - by Laurent Bugnion
    In MVVM applications, it can be a bit of a challenge to send instructions to the view (for example a page) from a viewmodel. Thankfully, we have good tools at our disposal to help with that. In his excellent series “MVVM Light Toolkit soup to nuts”, Jesse Liberty proposes one approach using the MVVM Light messaging infrastructure. While this works fine, I would like to show here another approach using what I call a “view service”, i.e. an abstracted service that is invoked from the viewmodel, and implemented on the view. Multiple kinds of view services In fact, I use view services quite often, and even started standardizing them for the Windows Phone 7 applications I work on. If there is interest, I will be happy to show other such view services, for example Animation services, responsible to start/stop animations on the view. Dialog service, in charge of displaying messages to the user and gathering feedback. Navigation service, in charge of navigating to a given page directly from the viewmodel. In this article, I will concentrate on the navigation service. The INavigationService interface In most WP7 apps, the navigation service is used in quite a straightforward way. We want to: Navigate to a given URI. Go back. Be notified when a navigation is taking place, and be able to cancel. The INavigationService interface is quite simple indeed: public interface INavigationService { event NavigatingCancelEventHandler Navigating; void NavigateTo(Uri pageUri); void GoBack(); } Obviously, this interface can be extended if necessary, but in most of the apps I worked on, I found that this covers my needs. The NavigationService class It is possible to nicely pack the navigation service into its own class. To do this, we need to remember that all the PhoneApplicationPage instances use the same instance of the navigation service, exposed through their NavigationService property. In fact, in a WP7 application, it is the main frame (RootFrame, of type PhoneApplicationFrame) that is responsible for this task. So, our implementation of the NavigationService class can leverage this. First the class will grab the PhoneApplicationFrame and store a reference to it. Also, it registers a handler for the Navigating event, and forwards the event to the listening viewmodels (if any). Then, the NavigateTo and the GoBack methods are implemented. They are quite simple, because they are in fact just a gateway to the PhoneApplicationFrame. The whole class is as follows: public class NavigationService : INavigationService { private PhoneApplicationFrame _mainFrame; public event NavigatingCancelEventHandler Navigating; public void NavigateTo(Uri pageUri) { if (EnsureMainFrame()) { _mainFrame.Navigate(pageUri); } } public void GoBack() { if (EnsureMainFrame() && _mainFrame.CanGoBack) { _mainFrame.GoBack(); } } private bool EnsureMainFrame() { if (_mainFrame != null) { return true; } _mainFrame = Application.Current.RootVisual as PhoneApplicationFrame; if (_mainFrame != null) { // Could be null if the app runs inside a design tool _mainFrame.Navigating += (s, e) => { if (Navigating != null) { Navigating(s, e); } }; return true; } return false; } } Exposing URIs I find that it is a good practice to expose each page’s URI as a constant. In MVVM Light applications, a good place to do that is the ViewModelLocator, which already acts like a central point of setup for the views and their viewmodels. 
Note that in some cases, it is necessary to expose the URL as a string, for instance when a query string needs to be passed to the view. So for example we could have: public static readonly Uri MainPageUri = new Uri("/MainPage.xaml", UriKind.Relative); public const string AnotherPageUrl = "/AnotherPage.xaml?param1={0}&param2={1}"; Creating and using the NavigationService Normally, we only need one instance of the NavigationService class. In cases where you use an IOC container, it is easy to simply register a singleton instance. For example, I am using a modified version of a super simple IOC container, and so I can register the navigation service as follows: SimpleIoc.Register<INavigationService, NavigationService>(); Then, it can be resolved where needed with: SimpleIoc.Resolve<INavigationService>(); Or (more frequently), I simply declare a parameter on the viewmodel constructor of type INavigationService and let the IOC container do its magic and inject the instance of the NavigationService when the viewmodel is created. On supported platforms (for example Silverlight 4), it is also possible to use MEF. Or, of course, we can simply instantiate the NavigationService in the ViewModelLocator, and pass this instance as a parameter of the viewmodels’ constructor, injected as a property, etc… Once the instance has been passed to the viewmodel, it can be used, for example with: NavigationService.NavigateTo(ViewModelLocator.ComparisonPageUri); Testing Thanks to the INavigationService interface, navigation can be mocked and tested when the viewmodel is put under unit test. Simply implement and inject a mock class, and assert that the methods are called as they should by the viewmodel. Conclusion As usual, there are multiple ways to code a solution answering your needs. I find that view services are a really neat way to delegate view-specific responsibilities such as animation, dialogs and of course navigation to other classes through an abstracted interface. In some cases, such as the NavigationService class exposed here, it is even possible to standardize the implementation and pack it in a class library for reuse. I hope that this sample is useful! Happy coding. Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
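    As an illustration of the Testing paragraph above, here is one possible hand-rolled mock of INavigationService for unit tests. MainViewModel, ShowComparisonCommand and the assertion shown are made-up names for the sketch, not part of the article; any mocking framework would do the same job.

        // Records what the viewmodel asked for, so a test can assert on it.
        // NavigatingCancelEventHandler comes from System.Windows.Navigation.
        public class MockNavigationService : INavigationService
        {
            public Uri LastNavigatedUri { get; private set; }
            public bool WentBack { get; private set; }

            public event NavigatingCancelEventHandler Navigating; // not raised by this mock

            public void NavigateTo(Uri pageUri)
            {
                LastNavigatedUri = pageUri;
            }

            public void GoBack()
            {
                WentBack = true;
            }
        }

        // Hypothetical usage in a test:
        // var navigation = new MockNavigationService();
        // var vm = new MainViewModel(navigation);
        // vm.ShowComparisonCommand.Execute(null);
        // Assert.AreEqual(ViewModelLocator.ComparisonPageUri, navigation.LastNavigatedUri);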

    Read the article

  • Representing Mauritius in the 2013 Bench Games

    Only by chance I came across an interesting option for professionals and enthusiasts in IT, and quite honestly I can't even remember where I caught attention of Brainbench and their 2013 Bench Games event. But having access to 600+ free exams in a friendly international intellectual competition doesn't happen to be available every day. So, it was actually a no-brainer to sign up and browse through the various categories. Most interestingly, Brainbench is not only IT-related. They offer a vast variety of fields in their Test Center, like Languages and Communication, Office Skills, Management, Aptitude, etc., and it can be a little bit messy about how things are organised. Anyway, while browsing through their test offers I added a couple of exams to 'My Plan' which I would give a shot afterwards. Self-assessments Actually, I took the tests based on two major aspects: 'Fun Factor' and 'How good would I be in general'... Usually, you have to pay for any kind of exams and given this unique chance by Brainbench to simply train this kind of tests was already worth the time. Frankly speaking, the tests are very close to the ones you would be asked to do at Prometric or Pearson Vue, ie. Microsoft exams, etc. Go through a set of multiple choice questions in a given time frame. Most of the tests I did during the Bench Games were based on 40 questions, each with a maximum of 3 minutes to answer. Ergo, one test in maximum 2 hours - that sounds feasible, doesn't it? The Measure of Achievement While the 2013 Bench Games are considered a worldwide friendly competition of knowledge I was really eager to get other Mauritians attracted. Using various social media networks and community activities it all looked quite well at the beginning. Mauritius was listed on rank #19 of Most Certified Citizens and rank #10 of Most Master Level Certified Nation - not bad, not bad... Until... the next update of the Bench Games Leaderboard. The downwards trend seemed to be unstoppable and I couldn't understand why my results didn't show up on the Individual Leader Board. First of all, I passed exams that were not even listed and second, I had better results on some exams listed. After some further information from the organiser it turned out that my test transcript wasn't available to the public. Only then results are considered and counted in the competition. During that time, I actually managed to hold 3 test results on the Individuals... Other participants were merciless, eh, more successful than me, produced better test results than I did. But still I managed to stay on the final score board: An 'exotic' combination of exam, test result, country and person itself Representing Mauritius and the Visual FoxPro community in that fun event. And although I mainly develop in Visual FoxPro 9.0 SP2 and C# using .NET Framework from 2.0 to 4.5 since a couple of years I still managed to pass on Master Level. Hm, actually my Microsoft Certified Programmer (MCP) exams are dated back in June 2004 - more than 9 years ago... Look who got lucky... As described above I did a couple of exams as time allowed and without any preparations, but still I received the following mail notification: "Thank you for recently participating in our Bench Games event.  We wanted to inform you that you obtained a top score on our test(s) during this event, and as a result, will receive a free annual Brainbench subscription.  
Your annual subscription will give you access to all our tests just like Bench Games, but for an entire year plus additional benefits!" -- Leader Board Notification from Brainbench Even fun activities get rewarded sometimes. Thanks to @Brainbench_com for the free annual subscription based on my passed 2013 Bench Games Master Level exam. It would be interesting to know about the total figures, especially to see how many citizens of Mauritius took part in this year's Bench Games. Anyway, I'm looking forward to be able to participate in other challenges like this in the future.

    Read the article

  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch Project is very easy way to visualize and manipulate information directly from one of our ADO.NET Providers. But when it comes to executing the Stored Procedures, it can be a bit more complicated. In this article, we will demonstrate how to execute a Stored Procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET Providers. Creating the RIA Service. Step 1: Open Visual Studio and create a new WCF RIA Service Class Project. Step 2:Add the reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project. Step 3: Add a new Domain Service Class to the (ProjectName).Web project. Step 4: In the new Domain Service Class, create a new class with the attributes needed for the Stored Procedure's parameters. In this demo, the Stored Procedure we are executing is called SendMessage. The parameters we will need are as follows: public class NewMessage{ [Key] public int ID { get; set; } public string FromEmail { get; set; } public string ToEmail { get; set; } public string Subject { get; set; } public string Text { get; set; } } Note: The created class must have an ID which will serve as the key value. Step 5: Create a new method that will executed when the insert event fires. Inside this method you can use the standards ADO.NET code which will execute the stored procedure. [Insert] public void SendMessage(NewMessage newMessage) { try { EmailConnection conn = new EmailConnection(connectionString); EmailCommand comm = new EmailCommand("SendMessage", conn); comm.CommandType = System.Data.CommandType.StoredProcedure; if (!newMessage.FromEmail.Equals("")) comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail)); if (!newMessage.ToEmail.Equals("")) comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail)); if (!newMessage.Subject.Equals("")) comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject)); if (!newMessage.Text.Equals("")) comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text)); comm.ExecuteNonQuery(); } catch (Exception exc) { Console.WriteLine(exc.Message); } } Step 6: Create a query method. We are not going to be using getNewMessages(), so it does not matter what it returns for the purpose of our example, but you will need to create a method for the query event as well. [Query(IsDefault=true)] public IEnumerable<NewMessage> getNewMessages() { return null; } Step 7: Rebuild the whole solution. Creating the LightSwitch Project. Step 8: Open Visual Studio and create a new LightSwitch Application Project. Step 9: On the Data Sources, add a new data source. Choose a WCF RIA Service Step 10: Choose to add a new reference and select the (Project Name).Web.dll generated from the RIA Service. Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity. Step 13: On the Screens section, create a new screen and select the NewMessage entity as the Screen Data. Step 14: After you run the project, you will be able to add a new record and save it. This will execute the Stored Procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent. Sample Project To help you with get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.
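    If a screen also needs to modify existing records, the same pattern extends to an update method on the domain service. The sketch below assumes a hypothetical UpdateMessage stored procedure with matching parameters; it is not part of the sample project above, just an illustration of how the [Insert] pattern carries over.

        // Hedged sketch: "UpdateMessage" is an assumed stored procedure name.
        [Update]
        public void UpdateMessage(NewMessage message)
        {
            EmailConnection conn = new EmailConnection(connectionString);
            EmailCommand comm = new EmailCommand("UpdateMessage", conn);
            comm.CommandType = System.Data.CommandType.StoredProcedure;
            comm.Parameters.Add(new EmailParameter("@Subject", message.Subject));
            comm.Parameters.Add(new EmailParameter("@Text", message.Text));
            comm.ExecuteNonQuery();
        }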

    Read the article

  • Multichannel Digital Engagement: Find Out How Your Organization Measures Up

    - by Michael Snow
    This article was originally published in the September 2013 Edition of the Oracle Information InDepth Newsletter ORACLE WEBCENTER EDITION Thanks to mobile and social technologies, interactive online experiences are now commonplace. Not only that, they give consumers more choices, influence, and control than ever before. So how can you make your organization stand out? The key building blocks for delivering exceptional cross-channel digital experiences are outlined below. Also, a new assessment tool is available to help you measure your organization's ability to deliver such experiences. A clearly defined digital strategy. The customer journey is growing increasingly complex, encompassing multiple touchpoints and channels. It used to be easy to map marketing efforts to specific offline channels; for example, a direct mail piece with an offer to visit a store for a discounted purchase. Now it is more difficult to cultivate and track such clear cause-and-effect relationships. To deliver an integrated digital experience in this more complex world, organizations need a clearly defined and comprehensive digital marketing strategy that is backed up by an integrated set of software, middleware, and hardware solutions. Strong support for business agility and speed-to-market. As both IT and marketing executives know, speed-to-market and business agility are key to competitive advantage. That means marketers need solutions to support the rapid implementation of online marketing initiatives—plus the flexibility to adapt quickly to a changing marketplace. And IT needs tools with the performance, scalability, and ease of integration to support marketing efforts. Both teams benefit when business users are empowered to implement marketing initiatives on their own, with minimal IT intervention. The ability to deliver relevant, personalized content. Delivering a one-size-fits-all online customer experience is no longer acceptable. Customers expect you to know who they are, including their preferences and past relationship with your brand. That means delivering the most relevant content from the moment a visitor enters your site. To make that happen, you need a powerful rules engine so that marketers and business users can easily define site visitor segments and deliver content accordingly. That includes both implicit targeting that is based on the user’s behavior, and explicit targeting that takes a user’s profile information into account. Ideally, the rules engine can also intelligently weight recommendations when multiple segments apply to a specific customer. Support for social interactivity. With the advent of Facebook and LinkedIn, visitors expect to participate in and contribute to your web presence—and share their experience on their own social networks. That requires easy incorporation of user-generated content such as comments, ratings, reviews, polls, and blogs; seamless integration with third-party social networking sites; and support for social login, which helps to remove barriers to social participation. The ability to deliver connected, multichannel experiences that include powerful, flexible mobile capabilities. By 2015, mobile usage is projected to surpass that of PCs and other wired devices. In other words, mobile is an essential element in delivering exceptional online customer experiences. This requires the creation and management of mobile experiences that are optimized for delivery to the thousands of different devices that are in use today. 
Just as important, organizations must be able to easily extend their traditional web presence to the mobile channel and deliver highly personalized and relevant multichannel marketing initiatives while also managing to minimize the time and effort required to manage mobile sites. Are you curious to know how your organization measures up when it comes to delivering an engaging, multichannel digital experience? If so, take this brief, 15-question online assessment and see how your organization scores in the areas of digital strategy, digital agility, relevance and personalization, social interactivity, and multichannel experience.

    Read the article

  • Hosted Monitoring

    - by Grant Fritchey
    The concept of using services to take the place of writing a lot of your own code goes way, way back in computing history. The fundamentals of the concept go back to the dawn of computing with places like IBM hosting time-shares for computing power that you could rent for short periods of time. But things really took off with the building of the Web. Now, all the growth with virtual machines, hosted machines, hosted services from vendors like Amazon and Microsoft, the need to keep all of your software locally on physical boxes is just going the way of the dodo. There will likely always be some pieces of software that you keep on machines on your property or on your person, but the concept of keeping fundamental services locally is going away. As someone put it to me once, if you were starting a business right now, would you bother setting up an Exchange server to manage your email or would you just go to one of the external mail services for everything? For most of us (who are not Exchange admins) the answer is pretty easy. With all this momentum to having external services manage more and more of the infrastructure that’s not business unique, why would you burn up a server and license instance setting up monitoring for your SQL Servers? Of course, some of you are dealing with hyper-sensitive data that might require, through law or treaty, that you lock it down and never expose it to the intertubes, but most of us are not. So, what if someone else took on the basic hassle of setting up monitoring on your systems? That’s what we’re working on here at Red Gate. Right now it’s a private test, but we’re growing it and developing it and it’ll be going to a public beta, probably (hopefully) this year. I’m running it on my machines right now. The concept is pretty simple. You put a relay on your server, poke a hole in your firewall for it, and we start monitoring your server using SQL Monitor. It’s actually shocking how easy it is to get going. You still have to adjust your alerting thresholds, but that’s a standard part of alerting. Your pain threshold and my pain threshold for any given alert may be different. But from there, we do all the heavy lifting, keeping your data online and available, providing you with access to the information about how your servers are behaving, everything. Maybe it’s just me, but I’m really excited by this. I think we’re getting to a place where we can really help the small and medium sized businesses get a monitoring solution in place, quickly and easily. All you crazy busy, and possibly accidental, DBAs and system admins finally can set up monitoring without taking all the time to configure systems, run installs, and all the rest. You just have to tweak your alerts and you’re ready to run. If you are interested in checking it out, you can apply for the closed beta through the Monitor web page.

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two part question relating to sending query results as attachments using sp_send_dbmail. Problem 1: Only basic .txt files will open. Any other format like .pdf or .jpg are corrupted. Problem 2: When attempting to send multiple attachments, I receive one file with all file names glued together. I'm running SQL Server 2005 and I have a table storing uploaded documents: CREATE TABLE [dbo].[EmailAttachment]( [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL, [MassEmailID] [int] NULL, -- foreign key [FileData] [varbinary](max) NOT NULL, [FileName] [varchar](100) NOT NULL, [MimeType] [varchar](100) NOT NULL I also have a MassEmail table with standard email stuff. Here is the SQL Send Mail script. For brevity, I've excluded declare statements. while ( (select count(MassEmailID) from MassEmail where status = 20 )>0) begin select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20 select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID select @Body = Body from MassEmail where MassEmailID = @MassEmailID set @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = '+ CAST(@MassEmailID as varchar(100)) select @filename = '' select @filename = COALESCE(@filename+ ',', '') +FileName from EmailAttachment where MassEmailID = @MassEmailID exec msdb.dbo.sp_send_dbmail @profile_name = 'MASS_EMAIL', @recipients = '[email protected]', @subject = @Subject, @body =@Body, @body_format ='HTML', @query = @query, @query_attachment_filename = @filename, @attach_query_result_as_file = 1, @query_result_separator = '; ', @query_no_truncate = 1, @query_result_header = 0; update MassEmailset status= 30,SendDate = GetDate() where MassEmailID = @MassEmailID end I am able to successfully read files from the database so I know the binary data is not corrupted. .txt files only read when I cast FilaData to varchar. But clearly original headers are lost. It's also worth noting that attachment file sizes are different than the original files. That is most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mimetype, or some way to include file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know coalesce is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
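    One hedged workaround, given that @query attachments are rendered as text (which is why only .txt survives and why the file names end up glued together): skip the query attachment entirely and send the mail from application code, streaming each varbinary row into a real attachment with its stored file name and MIME type. The C# sketch below follows the EmailAttachment schema above; the SMTP host, addresses and connection string are placeholders.

        // Sketch: build the attachments from the varbinary column instead of sp_send_dbmail.
        using System.Data.SqlClient;
        using System.IO;
        using System.Net.Mail;

        public static class AttachmentMailer
        {
            public static void Send(string connectionString, int massEmailId)
            {
                using (var message = new MailMessage("from@example.com", "to@example.com", "Subject", "Body"))
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT FileData, FileName, MimeType FROM dbo.EmailAttachment WHERE MassEmailID = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", massEmailId);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            byte[] data = (byte[])reader["FileData"];
                            // The MemoryStream stays referenced by the Attachment until Send completes.
                            message.Attachments.Add(new Attachment(
                                new MemoryStream(data), (string)reader["FileName"], (string)reader["MimeType"]));
                        }
                    }
                    new SmtpClient("smtp.example.com").Send(message);
                }
            }
        }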

    Read the article

  • Rails3 server and bundler error: uninitialized constant Bundler (NameError)

    - by .yandex.rurap-kasta
    I just installed Rails 3 and all the gems it needs, but when I try to start the server it complains about a problem in the boot script.

    [rap-kasta@acerAspire testR3]$ script/rails server
    /home/rap-kasta/tmp/testR3/config/boot.rb:7:in `rescue in <top (required)>': uninitialized constant Bundler (NameError)
    from /home/rap-kasta/tmp/testR3/config/boot.rb:2:in `<top (required)>'
    from script/rails:9:in `require'
    from script/rails:9:in `<main>

    So I tried to reinstall Bundler and to install the "pre" version (though it actually has a lower version number than the one I installed with gem install bundler). These are the gems now on the system: abstract (1.0.0) actionmailer (3.0.0.beta, 2.3.5, 2.3.4) actionpack (3.0.0.beta, 2.3.5, 2.3.4) activemodel (3.0.0.beta) activerecord (3.0.0.beta, 2.3.5, 2.3.4) activeresource (3.0.0.beta, 2.3.5, 2.3.4) activesupport (3.0.0.beta, 2.3.5, 2.3.4) arel (0.2.1, 0.2.pre) builder (2.1.2) bundler (0.9.5) erubis (2.6.5) fxri (0.3.7) fxruby (1.6.20) i18n (0.3.3) jemini (2010.1.24, 2010.1.5) mail (2.1.2) memcache-client (1.7.8) mime-types (1.16) mysql (2.8.1) nifty-generators (0.3.2, 0.3.0) rack (1.1.0, 1.0.1, 1.0.0) rack-mount (0.5.1, 0.4.0) rack-openid (0.2.3, 0.2.2) rack-test (0.5.3) rails (3.0.0.beta, 2.3.5, 2.3.4) railties (3.0.0.beta) rake (0.8.7) rawr (1.3.8) RedCloth (4.2.2) ruby-mysql (3.0.2) ruby-openid (2.1.7) rubygems-update (1.3.5) rubyzip (0.9.4, 0.9.1) rubyzip2 (2.0.1) sqlite3-ruby (1.2.5) text-format (1.0.0) text-hyphen (1.0.0) thor (0.13.2, 0.13.1) tzinfo (0.3.16)

    Also, I get the same error with rails console and a similar one with bundle check:

    [rap-kasta@acerAspire testR3]$ bundle check
    /usr/lib/ruby/gems/1.9.1/gems/bundler-0.9.5/bin/bundle:12:in `rescue in <top (required)>': uninitialized constant Bundler::BundlerError (NameError)
    from /usr/lib/ruby/gems/1.9.1/gems/bundler-0.9.5/bin/bundle:10:in `<top (required)>'
    from /usr/bin/bundle:19:in `load'
    from /usr/bin/bundle:19:in `<main>'

    Read the article

  • Reporting services 2008: ReportExecution2005.asmx does not exist

    - by Shimrod
    Hi everyone, I'm trying to generate a report directly from code (to send it by mail afterwards). I'm doing this in a Windows service. So here is what I'm doing: Dim rview As New ReportViewer() Dim reportServerAddress As String = "http://server/Reports_client" rview.ServerReport.ReportServerUrl = New Uri(reportServerAddress) Dim paramList As New List(Of Microsoft.Reporting.WinForms.ReportParameter) paramList.Add(New Microsoft.Reporting.WinForms.ReportParameter("param1", t.Value)) paramList.Add(New Microsoft.Reporting.WinForms.ReportParameter("CurrentDate", Date.Now)) Dim reportsDirectory As String = "AppName.Reports" Dim reportPath As String = String.Format("/{0}/{1}", reportsDirectory, reportName) rview.ServerReport.ReportPath = reportPath rview.ServerReport.SetParameters(paramList) 'This is where I get the exception Dim mimeType, encoding, extension, deviceInfo As String Dim streamids As String() Dim warnings As Microsoft.Reporting.WinForms.Warning() deviceInfo = "<DeviceInfo><SimplePageHeaders>True</SimplePageHeaders></DeviceInfo>" Dim format As String = "PDF" Dim bytes As Byte() = rview.ServerReport.Render(format, deviceInfo, mimeType, encoding, extension, streamids, warnings) When debugging this code, I can see it throws a MissingEndpointException where I call SetParameters(paramList), with this message: The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version. Looking in the server's log file, I can see this: ui!ReportManager_0-8!878!06/02/2010-11:34:36:: Unhandled exception: System.Web.HttpException: The file '/Reports_client/ReportExecution2005.asmx' does not exist. at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath) at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVPathBuildResult(HttpContext context, VirtualPath virtualPath) at System.Web.UI.WebServiceParser.GetCompiledType(String inputFile, HttpContext context) at System.Web.Services.Protocols.WebServiceHandlerFactory.GetHandler(HttpContext context, String verb, String url, String filePath) at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig) at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) I couldn't find any resource on the web that fits my problem. Does anyone have a clue? I'm able to view the reports from a web application, so I'm sure the server is running. Thanks in advance.
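
    The log entry points at the likely cause: http://server/Reports_client looks like a Report Manager virtual directory, and ReportExecution2005.asmx only exists under the Report Server web service virtual directory (commonly /ReportServer, though the name is configurable; check the Web Service URL page in Reporting Services Configuration Manager). A sketch of the change, with the vdir name assumed:

        ' Sketch: point ServerReport at the Report Server web service URL, not Report Manager.
        Dim rview As New ReportViewer()
        rview.ProcessingMode = ProcessingMode.Remote
        rview.ServerReport.ReportServerUrl = New Uri("http://server/ReportServer") ' assumed vdir name
        rview.ServerReport.ReportPath = "/AppName.Reports/" & reportName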

    Read the article

  • SSL authentication error: RemoteCertificateChainErrors on ASP.NET on Ubuntu

    - by Frank Krueger
    I am trying to access Gmail's SMTP service from an ASP.NET MVC site running under Mono 2.4.2.3. But I keep getting this error: System.InvalidOperationException: SSL authentication error: RemoteCertificateChainErrors at System.Net.Mail.SmtpClient.m__3 (System.Object sender, System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Security.Cryptography.X509Certificates.X509Chain chain, SslPolicyErrors sslPolicyErrors) [0x00000] at System.Net.Security.SslStream+c__AnonStorey9.m__9 (System.Security.Cryptography.X509Certificates.X509Certificate cert, System.Int32[] certErrors) [0x00000] at Mono.Security.Protocol.Tls.SslClientStream.OnRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000] at Mono.Security.Protocol.Tls.SslStreamBase.RaiseRemoteCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] errors) [0x00000] at Mono.Security.Protocol.Tls.SslClientStream.RaiseServerCertificateValidation (System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Int32[] certificateErrors) [0x00000] at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.validateCertificates (Mono.Security.X509.X509CertificateCollection certificates) [0x00000] at Mono.Security.Protocol.Tls.Handshake.Client.TlsServerCertificate.ProcessAsTls1 () [0x00000] at Mono.Security.Protocol.Tls.Handshake.HandshakeMessage.Process () [0x00000] at (wrapper remoting-invoke-with-check) Mono.Security.Protocol.Tls.Handshake.HandshakeMessage:Process () at Mono.Security.Protocol.Tls.ClientRecordProtocol.ProcessHandshakeMessage (Mono.Security.Protocol.Tls.TlsStream handMsg) [0x00000] at Mono.Security.Protocol.Tls.RecordProtocol.InternalReceiveRecordCallback (IAsyncResult asyncResult) [0x00000] I have installed certificates using: certmgr -ssl -m smtps://smtp.gmail.com:465 with this output: Mono Certificate Manager - version 2.4.2.3 Manage X.509 certificates and CRL from stores. Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed. X.509 Certificate v3 Issued from: C=US, O=Equifax, OU=Equifax Secure Certificate Authority Issued to: C=US, O=Google Inc, CN=Google Internet Authority Valid from: 06/08/2009 20:43:27 Valid until: 06/07/2013 19:43:27 *** WARNING: Certificate signature is INVALID *** Import this certificate into the CA store ?yes X.509 Certificate v3 Issued from: C=US, O=Google Inc, CN=Google Internet Authority Issued to: C=US, S=California, L=Mountain View, O=Google Inc, CN=smtp.gmail.com Valid from: 04/22/2010 20:02:45 Valid until: 04/22/2011 20:12:45 Import this certificate into the AddressBook store ?yes 2 certificates added to the stores. In fact, this worked for a month but mysteriously stopped working on May 5. I installed these new certs today, but I am still getting these errors.
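
    RemoteCertificateChainErrors under Mono generally means Mono's own certificate store cannot build the chain (Mono does not consult the OS trust store), and a rotated server certificate can invalidate a previously imported leaf. Two hedged options, depending on the Mono version: re-import the Mozilla roots with mozroots --import --machine --sync and then re-run the certmgr import, or override validation in code for this one host, as in the sketch below (the subject check is an assumption; it weakens TLS checking, so treat it as a stopgap rather than a fix):

        // Sketch: narrow override of certificate validation for the Gmail SMTP host.
        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        static class MailCertWorkaround
        {
            // Call once at application start-up, before SmtpClient.Send is used.
            public static void Install()
            {
                ServicePointManager.ServerCertificateValidationCallback =
                    delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors)
                    {
                        return errors == SslPolicyErrors.None
                            || cert.Subject.Contains("CN=smtp.gmail.com");   // assumed subject check
                    };
            }
        }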

    Read the article

  • prevent outlook stationery from showing up in my email (Outlook 2007)

    - by KevinDeus
    There are some people in my office who insist on using cute stationery, and some of it makes messages difficult to read. I really just want to read email on a white background with no distractions. Is there a way to disable stationery on incoming mail in Outlook (without switching to "plain text only")? Yes, I yanked that description from here, but it is very accurate; however, I've had no luck finding a solution. Most solutions I see solve the problem by pushing out something to a bunch of users, like this; I don't really have the authority to do that. Not only that, it only prevents ME from setting stationery. This has been asked before, to no avail. I don't have time to deal with this, so hopefully there is something I have overlooked. Without switching to "plain text only", I want to be able to change a setting on my computer (it can be a reg hack, I don't care) that will prevent Outlook stationery from showing up in my email. It would also be helpful to know how to do it for Outlook 2003 as well.

    Read the article

  • "Win32 exception occurred releasing IUnknown at..." error using Pylons and WMI

    - by Anders
    Hi all, I'm using Pylons in combination with the WMI module to do some basic system monitoring of a couple of machines. For POSIX-based systems everything is simple; for Windows, not so much. I make a request to the Pylons server to get the current CPU load, but it's not working well, at least not with the WMI module. First I simply did something like this: c = wmi.WMI() for cpu in c.Win32_Processor(): value = cpu.LoadPercentage However, that gave me an error when accessing this module via Pylons (GET http://ip:port/cpu): raise x_wmi_uninitialised_thread ("WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex]") x_wmi_uninitialised_thread: <x_wmi: WMI returned a syntax error: you're probably running inside a thread without first calling pythoncom.CoInitialize[Ex] (no underlying exception)> Looking at http://timgolden.me.uk/python/wmi/tutorial.html, I wrapped the code according to the example under the topic "CoInitialize & CoUninitialize", which makes the code work, but it keeps throwing "Win32 exception occurred releasing IUnknown at..." I then looked at http://mail.python.org/pipermail/python-win32/2007-August/006237.html and the follow-up post and tried to follow that; however, pythoncom._GetInterfaceCount() is always 20. I'm guessing this is somehow related to Pylons spawning worker threads, but I'm kind of lost here; advice would be nice. Thanks in advance, Anders
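
    A minimal sketch of the per-thread COM initialisation pattern (the controller function name is hypothetical; adapt it to the Pylons action). The key detail for the "releasing IUnknown" warning is that the WMI proxy should be released before CoUninitialize runs in the same thread, otherwise COM is already torn down when the object is finalised:

        # Sketch for a Pylons controller action running in a worker thread.
        import pythoncom
        import wmi

        def cpu_load_percentages():
            pythoncom.CoInitialize()
            c = None
            try:
                c = wmi.WMI()
                return [cpu.LoadPercentage for cpu in c.Win32_Processor()]
            finally:
                del c                       # drop the COM proxy first
                pythoncom.CoUninitialize()  # then shut COM down for this thread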

    Read the article

  • String formatting in cocoa

    - by lostInTransit
    Hi, I am trying to send some text in an email from my Cocoa app (by using Mail.app). Initially I tried using HTML to send properly formatted text, but the mailto: URL does not support HTML tags (even after setting headers), so I decided to use a formatted string (left-aligning the fields). This is what I have in my mailto: link's body argument: NSMutableString *emailBody = [NSMutableString stringWithFormat:@"|%-35s", [@"Name" UTF8String]]; [emailBody appendFormat:@"|%-18s", [@"Number" UTF8String]]; [emailBody appendString:@"|Notes\n"]; [emailBody appendString:@"----------------------------------------------------------------------------------------------------"]; for(int i = 0; i < [items count]; i++){ NSDictionary *props = [items objectAtIndex:i]; NSMutableString *emailData = [NSMutableString stringWithFormat:@"|%-35s", [[props valueForKey:@"name"] UTF8String]]; [emailData appendFormat:@"|$ %-16s", [[props valueForKey:@"number"] UTF8String]]; [emailData appendString:[props valueForKey:@"notes"]]; [emailBody appendString:@"\n"]; [emailBody appendString:emailData]; } This does give me padded text, but the columns don't all take up the same space (for instance, if there is an O in the text, it takes up more space than other characters and ruins the formatting). Is there another sure-shot way to format text using just NSString? Thanks

    Read the article

  • java.lang.IllegalStateException: Neither BindingResult nor plain target object for bean name 'editFo

    - by mansoor
    I am working with the Spring framework. The problem is on the forgot-password JSP page: when I enter a valid name or email address, the function sends a mail to the user. But when the user enters an invalid username or email id, I want the controller method to report the error through the BindException and, instead of returning getSuccessView(), redisplay the same page and print "Invalid username or email id". The controller code is the following: public class EditForgotPasswordFormController extends AbstractBaseFormController{ UserManager userManager; @Override protected ModelAndView executeOnSubmit(ExtendedHttpServletRequest request, ExtendedHttpServletResponse response, Object command, BindException errors) throws Exception { EditForgotPassword e=(EditForgotPassword)command; String s = e.getUsernameOrEmail(); try{ userManager.getNewPassword(s); }catch(RuntimeException exp) { errors.reject("Invailid Username Or EmailId",s); return new ModelAndView("normal/profile/forgot_password"); } return new ModelAndView(getSuccessView()) .addObject("heading", "Please Check You Email") .addObject("message", "Your New Account Information Is sent To At Your Email Id. "); } public UserManager getUserManager() { return userManager; } public void setUserManager(UserManager userManager) { this.userManager = userManager; } } When the entered name or email id is not valid, I want to redirect back to the same JSP the request came from, as indicated above, and print the message on that page.
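
    A sketch of one way to redisplay the form with the error, assuming AbstractBaseFormController ultimately extends Spring's SimpleFormController (an assumption; adjust to the real base class). The "Neither BindingResult nor plain target object" exception in the title is typically what happens when the form view is rendered without the command object and its errors in the model, which is exactly what errors.getModel() supplies:

        // Sketch only: the error code and field name are assumptions.
        @Override
        protected ModelAndView executeOnSubmit(ExtendedHttpServletRequest request,
                ExtendedHttpServletResponse response, Object command,
                BindException errors) throws Exception {
            EditForgotPassword form = (EditForgotPassword) command;
            try {
                userManager.getNewPassword(form.getUsernameOrEmail());
            } catch (RuntimeException ex) {
                // Register a field error so <spring:bind>/<form:errors> on the JSP can show it.
                errors.rejectValue("usernameOrEmail", "invalid.usernameOrEmail",
                        "Invalid username or email id");
                // Re-render the form view with the command object and errors in the model,
                // instead of a bare view name, so the page's binding tags have a target.
                return new ModelAndView("normal/profile/forgot_password", errors.getModel());
            }
            return new ModelAndView(getSuccessView())
                    .addObject("heading", "Please Check Your Email")
                    .addObject("message", "Your new account information has been sent to your email id.");
        }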

    Read the article

  • Emacs 23.1 make error 139 in Mac OS X 10.6.3

    - by penyuan
    When I try to compile GNU Emacs 23.1 on my machine with Mac OS X 10.6.3, I repeatedly get the following at the end of the build: Directories: /src/emacs-23.1/lisp/. /src/emacs-23.1/lisp/./calc /src/emacs-23.1/lisp/./calendar /src/emacs-23.1/lisp/./emacs-lisp /src/emacs-23.1/lisp/./emulation /src/emacs-23.1/lisp/./erc /src/emacs-23.1/lisp/./eshell /src/emacs-23.1/lisp/./gnus /src/emacs-23.1/lisp/./international /src/emacs-23.1/lisp/./language /src/emacs-23.1/lisp/./mail /src/emacs-23.1/lisp/./mh-e /src/emacs-23.1/lisp/./net /src/emacs-23.1/lisp/./nxml /src/emacs-23.1/lisp/./org /src/emacs-23.1/lisp/./play /src/emacs-23.1/lisp/./progmodes /src/emacs-23.1/lisp/./textmodes /src/emacs-23.1/lisp/./url /bin/sh: line 1: 69491 Segmentation fault EMACSLOADPATH=/src/emacs-23.1/lisp LC_ALL=C ../src/bootstrap-emacs -batch --no-site-file --multibyte -l autoload --eval '(setq generated-autoload-file "/src/emacs-23.1/lisp/loaddefs.el")' -f batch-update-autoloads $wins make[2]: *** [autoloads] Error 139 make[1]: *** [/src/emacs-23.1/src/../lisp/loaddefs.el] Error 2 make: *** [src] Error 2 Does anyone know what this means and what I could do to resolve the issue? By the way, here are my ./configure settings: ./configure --prefix=/usr/local --x-includes=/usr/X11/include --x-libraries=/usr/X11/lib --with-x I've tried to compile both with and without X, with no success.

    Read the article

  • Apk Expansion Files - Application Licensing - Developer account - NOT_LICENSED response

    - by mUncescu
    I am trying to use the APK Expansion Files extension for Android. I have uploaded the APK to the server along with the expansion files. If the application was previously published, I get a response from the server saying NOT_LICENSED. The code I use is: APKExpansionPolicy aep = new APKExpansionPolicy(mContext, new AESObfuscator(getSALT(), mContext.getPackageName(), deviceId)); aep.resetPolicy(); LicenseChecker checker = new LicenseChecker(mContext, aep, getPublicKey()); checker.checkAccess(new LicenseCheckerCallback() { @Override public void allow(int reason) { } @Override public void dontAllow(int reason) { try { switch (reason) { case Policy.NOT_LICENSED: mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_UNLICENSED); break; case Policy.RETRY: mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL); break; } } finally { setServiceRunning(false); } } @Override public void applicationError(int errorCode) { try { mNotification.onDownloadStateChanged(IDownloaderClient.STATE_FAILED_FETCHING_URL); } finally { setServiceRunning(false); } } }); So if the application wasn't previously published, the allow method is called. If the application was previously published and now it isn't, the dontAllow method is called. I have tried http://developer.android.com/guide/google/play/licensing/setting-up.html#test-response, which says that if you use a developer or test account on your test device you can set a specific response; I set LICENSED as the response and still get NOT_LICENSED. Resetting the phone, clearing the Google Play Store cache and application data, and changing the versionCode in different combinations still doesn't work. Edit: In case someone else is facing this problem, I received a mail from the Google support team: "We are aware that newly created accounts for testing in-app billing and Google licensing server (LVL) return errors, and are working on resolving this problem. Please stay tuned. In the meantime, you can use any accounts created prior to August 1st, 2012, for testing." So it seems to be a problem on their server; if I use the main developer account, everything works fine.

    Read the article

  • Add comment programmatically in Drupal 7

    - by volocuga
    I'm trying to create a comment in my own module. $comment = new stdClass(); $comment->nid = 555; // Node Id the comment will be attached to $comment->cid = 0; $comment->pid = 0; $comment->uid = 1; $comment->mail = '[email protected]'; $comment->name = 'admin'; $comment->is_anonymous = 0; $comment->homepage = ''; $comment->status = COMMENT_PUBLISHED; $comment->language = LANGUAGE_NONE; $comment->subject = 'Comment subject'; $comment->comment_body[$comment->language][0]['value'] = 'Comment body text'; $comment->comment_body[$comment->language][0]['format'] = 'filtered_html'; comment_submit($comment); comment_save($comment); The code causes the following error: Fatal error: Call to undefined function node_load() in BLA/BLA/comment.module on line 1455 The node_load() function is in the node module, which is, of course, enabled. How do I fix it? Thanks!
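
    One common cause, offered as an assumption since the calling context isn't shown: comment_save() ends up calling node_load(), so it only works once Drupal is fully bootstrapped with the node module loaded; running the snippet from a standalone script, an early bootstrap phase, or hook_boot() produces exactly this fatal error. A sketch of a standalone harness (the install path and the node_type value are hypothetical):

        <?php
        // Sketch: fully bootstrap Drupal before saving the comment, so node.module
        // (and therefore node_load()) is available to comment_save().
        define('DRUPAL_ROOT', '/var/www/drupal');          // hypothetical install path
        require_once DRUPAL_ROOT . '/includes/bootstrap.inc';
        drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);

        $comment = new stdClass();
        $comment->nid = 555;
        $comment->cid = 0;
        $comment->pid = 0;
        $comment->uid = 1;
        $comment->node_type = 'comment_node_article';      // 'comment_node_' . type of node 555 (assumed)
        $comment->is_anonymous = 0;
        $comment->status = COMMENT_PUBLISHED;
        $comment->language = LANGUAGE_NONE;
        $comment->subject = 'Comment subject';
        $comment->comment_body[LANGUAGE_NONE][0]['value'] = 'Comment body text';
        $comment->comment_body[LANGUAGE_NONE][0]['format'] = 'filtered_html';

        comment_submit($comment);
        comment_save($comment);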

    Read the article

< Previous Page | 303 304 305 306 307 308 309 310 311 312 313 314  | Next Page >