Search Results

Search found 3300 results on 132 pages for 'permission'.


  • Building a personal website using Silverlight.

    - by mbcrump
    I’ve always believed that as a developer you should always have a hobby project going on. I think a hobby project needs to contain at least one of the following things: something that you have never done before; something that you are interested in; something that you can work on in your spare time without affecting your *paying* job. I decided my hobby project would be an entire web application written in Silverlight that could be used as a self-promotion/marketing tool. The goal of the site is to provide information on the work that I’ve done to conferences, future employers and anyone else who wants to learn more about me. Before I go any further, if you just want to check out the site then it is located at http://michaelcrump.info. So, what did I use to create it? MVVM Light – I’m a big fan of this software. The item and project templates plus code snippets make this a huge win for any SL/WPF/WP7 application. Jetpack Theme by Microsoft – I suck at designing so I used this template to help speed up this project. ComponentOne 3rd Party Controls – I have a license and really like several of their products. A User Control that Jeremy Likness created called DynamicXaml (used with his permission) – I had created my own version of this a while back, but Jeremy’s implementation was simply better. Main Page – Designed to create my “brand”. This was built for a quick glimpse of who I am and what I do. Blog – The best marketing tool for a developer is their blog. I decided to go with an HTML page displaying my site, and the user can pop into full-screen if desired. I also included my feed and Silverlight-Zone (another site I work on). Online – This page links to sites that I have been featured on as well as community involvement and awards. I also have a web service so that I can update this information without re-compiling the Silverlight app. Projects – I’ve been wanting to use a CoverFlow for a really long time now. =) This page lists several hobby projects as well as a few professional projects. Resume Page – This page only exists because I got tired of sending companies my resume in e-mail. I can now provide a deep link to this page and the recruiter can print, search or save my resume. The PDF of my resume exists in a folder that I can easily update without recompiling the app. Contact Page – Just a contact page with a web service that sends the email. The Send button becomes disabled after a successful send. I thought of adding a captcha to this page but in the end didn’t think it was worth it. Looking back at this app, I’m happy with how it turned out. I love Silverlight and I am already thinking of my next hobby project (thinking of another Windows Phone 7 app or MVC3). Subscribe to my feed

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspxFollowing on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email address, mobile apps and SQS queues all in one go. As the topic owner, when you request a subscription on any channel, the owner needs to confirm they’re happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK. Let’s say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compare to the SQS class which is just called QueueClient): var request = new CreateTopicRequest(); request.Name = TopicName; var response = _snsClient.CreateTopic(request); TopicArn = response.TopicArn; In the response from AWS (which I’m assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request: var response = _snsClient.Subscribe(new SubscribeRequest { TopicArn = TopicArn, Protocol = "sqs", Endpoint = _queueClient.QueueArn }); That response will give you an ARN for the subscription, which you’ll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn’t give you a nice mechanism for doing that, so I’ve extended my AWS wrapper with a method that encapsulates it: internal void AllowSnsToSendMessages(TopicClient topicClient) { var policy = Policies.AllowSendFormat.Replace("%QueueArn%", QueueArn).Replace("%TopicArn%", topicClient.TopicArn); var request = new SetQueueAttributesRequest(); request.Attributes.Add("Policy", policy); request.QueueUrl = QueueUrl; var response = _sqsClient.SetQueueAttributes(request); } That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action: public const string AllowSendFormat= @"{ ""Statement"": [ { ""Sid"": ""MySQSPolicy001"", ""Effect"": ""Allow"", ""Principal"": { ""AWS"": ""*"" }, ""Action"": ""sqs:SendMessage"", ""Resource"": ""%QueueArn%"", ""Condition"": { ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" } } } ] }"; There’s a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2. 
Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use:  var topicClient = new TopicClient(“BigNews”, “ImListening”); And the topic client has a Subscribe() method, which calls into the message pump on the queue client: topicClient.Subscribe(x=>Log.Debug(x.Body)); var message = {}; //etc. topicClient.Publish(message); So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.
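    Pulling the pieces above together, a minimal end-to-end sketch with the v2 SDK might look like the following. It assumes _snsClient and _sqsClient have already been constructed with credentials and a region, that queueUrl came back from creating the queue, and that Policies.AllowSendFormat is the JSON policy template shown above – treat it as an untested outline rather than drop-in code.

        // Create the topic and capture its ARN.
        var topicArn = _snsClient.CreateTopic(new CreateTopicRequest { Name = "BigNews" }).TopicArn;

        // Read the queue's ARN back so it can be used as the subscription endpoint.
        var queueArn = _sqsClient.GetQueueAttributes(new GetQueueAttributesRequest
        {
            QueueUrl = queueUrl,
            AttributeNames = new List<string> { "QueueArn" }
        }).Attributes["QueueArn"];

        // Subscribe the queue to the topic over the "sqs" protocol.
        _snsClient.Subscribe(new SubscribeRequest
        {
            TopicArn = topicArn,
            Protocol = "sqs",
            Endpoint = queueArn
        });

        // Grant the topic permission to send to the queue by attaching the policy attribute.
        var policy = Policies.AllowSendFormat
            .Replace("%QueueArn%", queueArn)
            .Replace("%TopicArn%", topicArn);

        var setAttributes = new SetQueueAttributesRequest { QueueUrl = queueUrl };
        setAttributes.Attributes.Add("Policy", policy);
        _sqsClient.SetQueueAttributes(setAttributes);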

    Read the article

  • SQL SERVER – Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue

    - by pinaldave
    There is an old quote, “A Picture is Worth a Thousand Words”. I believe this quote immensely. Quite often I get phone calls saying that something is not working and asking if I can help. My reaction in most cases is that I need to know more – send me the exact error or a screenshot. Until and unless I see the error or reproduce the scenario myself, I prefer not to comment. Yesterday I got a similar phone call from an old friend who was not sure what was going on. Here is what he said: “When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user created tables and other objects. Can you help me fix it?” My immediate reaction was that he was facing a security and permission issue. However, before making that recommendation I suggested that he send me a screenshot of his own SSMS and his colleague’s SSMS. After carefully looking at both screenshots, I was very confident about the issue and we were able to resolve it. Let us reproduce the same scenario – maybe there is some learning in it for us.

    Issue: User not able to see user created objects. First let us see the image of my friend’s SSMS screen (recreated on my machine). Now let us see my friend’s colleague’s SSMS screen (recreated on my machine). You can see that my friend could not see the user tables, but his colleague certainly could. Now I believed it was a permissions issue. Further to this, I asked him to send me another image where I could see the various permissions of the user in the database: my friend’s screen and my friend’s colleague’s screen. This indeed proved that my friend did not have access to the AdventureWorks database, which is why he was not able to see its objects. He did have public access, which means he had rights similar to guest access. However, their SQL Server had followed my earlier advice on limiting guest access, which meant he was not able to see any user created objects. My next question was to validate what kind of access my friend’s colleague had. He replied that the colleague is the admin of the server. I suggested that if my friend was supposed to have admin access to the database, he should request it from his colleague. My friend promptly asked his colleague, who on the following screen added him as an admin. You can do the same using the following T-SQL script as well:

        USE [AdventureWorks2012]
        GO
        ALTER ROLE [db_owner] ADD MEMBER [testguest]
        GO

    Once my friend was an admin he was able to access all the user objects just as he was expecting. Please note, this complete exercise was done on a development server. One should not play around with security on a live or production server. Security is an issue that should be left to only the senior administrators of the server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Importance of User Without Login

    - by pinaldave
    Some questions are very open ended and it is very hard to come up with exact requirements. Here is one question I was asked in a recent User Group Meeting. Question: “In recent versions of SQL Server we can create a user without a login. What is the use of it?” Great question indeed. Let me first attempt to answer this question, but after reading my answer I need your help – I want you to help him as well by adding more value to it.

    Answer: Let us visualize a scenario. An application has lots of different operations and many of them are very sensitive operations. The common practice was to give the application a specific role which has more permissions and a higher access level. When a regular user logs in (not a system admin), he/she might have very restrictive permissions. The application itself had a username and password, which means the application can directly log in to the database and perform the operation. Developers were well aware of the username and password as it was embedded in the application. When a developer left the organization or when the password was changed, the parts of the application where the same username and password were used had to be changed. Additionally, developers were able to use the same username and password to log in directly themselves. In earlier versions of SQL Server there were application roles; these were later replaced by “User without Login”.

    Now let us recreate the above scenario using this new “User without Login”. In this case, users will have to log in to SQL Server using their own credentials. This means that the user who is logged in will have his/her own username and password. Once the login is done in SQL Server, the user will be able to use the application. Now the database should have another user without login which has all the necessary permissions and rights to execute various operations. The application will then be able to execute the script by impersonating the “user without login”, which has more permissions. Here it is assumed that the user’s own login does not have enough permissions and that the user without login has more rights. If a user knows how the application is using the database and its various operations, he can switch the context to the user without login, enabling him to make further modifications. Make sure to explicitly DENY the view definition permission on the database. This will make things more difficult for the user, as he will have to know the exact details to get additional permissions. If a user is a system admin, all the details which I just mentioned in the above three paragraphs do not apply, as an admin always has access to everything. Additionally, the method described above is just one possible architecture, and if someone is attempting to damage the system, they may still be able to figure out a workaround. You will have to put further auditing and policy based management in place to prevent such incidents and accidents.

    I guess this is my answer. I read it multiple times but I still feel that I am missing something. There should be more to this concept than what I have just described. I have merely described one scenario, but there will be many more scenarios where this feature will be useful. Now it is your turn to help – please leave a comment with additional suggestions on where exactly “User without Login” will be useful, as well as anything I missed when describing the above scenario.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • JavaOne User Group Sunday

    - by Tori Wieldt
    Before any "official" sessions of JavaOne 2012, the Java community was already sizzling. User Group Sunday was a great success, with several sessions offered by Java community members for anyone wanting to attend. Sessions were both about Java and best practices for running a JUG. Technical sessions included "Autoscaling Web Java Applications: Handle Peak Traffic with Zero Downtime and Minimized Cost,"  "Using Java with HTML5 and CSS3," and "Gooey and Sticky Bits: Everything You Ever Wanted to Know About Java." Several sessions were about how to start and run a JUG, like "Getting Speakers, Finding Sponsors, Planning Events: A Day in the Life of a JUG" and "JCP and OpenJDK: Using the JUGs’ “Adopt” Programs in Your Group." Badr ElHouari and Faiçal Boutaounte presented the session "Why Communities Are Important and How to Start One." They used the example of the Morocco JUG, which they started. Before the JUG, there was no "Java community," they explained. They shared their best practices, including: have fun, enjoy what you are doing get a free venue to have regular meetings, a University is a good choice run a conference, it gives you visibility and brings in new members students are a great way to grow a JUG Badr was proud to mention JMaghreb, a first-time conference that the Morocco JUG is hosting in November. They have secured sponsors and international speakers, and are able to offer a free conference for Java developers in North Africa. The session also included a free-flowing discussion about recruiters (OK to come to meetings, but not to dominate them), giving out email addresses (NEVER do without permission), no-show rates (50% for free events) and the importance of good content (good speakers really help!). Trisha Gee, member of the London Java Community (LJC) was one of the presenters for the session "Benefits of Open Source." She explained how open sourcing the LMAX Disruptor (a high performance inter-thread messaging library) gave her company LMAX several benefits, including more users, more really good quality new hires, and more access to 3rd party companies. Being open source raised the visibility of the company and the product, which was good in many ways. "We hired six really good coders in three months," Gee said. They also got community contributors for their code and more cred with technologists. "We had been unsuccessful at getting access to executives from other companies in the high-performance space. But once we were open source, the techies at the company had heard of us, knew our code was good, and that opened lots of doors for us." So, instead of "giving away the secret sauce," by going open source, LMAX gained many benefits. "It was a great day," said Bruno Souza, AKA The Brazilian Java Man, "the sessions were well attended and there was lots of good interaction." Sizzle and steak!

    Read the article

  • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    - by pinaldave
    Here is the email which made me write this blog post. When I write a blog post, I write keeping in mind that a developer who is not familiar with the concept will attempt it on a development server. If for any reason you attempt it on any server other than your personal server, you should make sure you have complete confidence in your own expertise and understand the risk behind it. Well, let us read the email which I received. I have modified it a bit to remove information related to the organization and individuals.

    “I just read your blog post on Beginning DQS. I went ahead and followed every single screenshot and it worked fine. I was able to execute the DQS project successfully. However, the same blog post got me in trouble – serious trouble. After the first successful deployment I went ahead and created a few of my own knowledge bases and projects. I played around a bit and then decided to get back to real work. Now, we had deployed DQS on the production server only, so my experiments were on the production server. When I got back to my work, I forgot to close all the windows. My manager found the window open and saw my test projects. He asked me to delete my experiments immediately and said words which I cannot write to you. Here is the problem. I am not able to delete the project which I created earlier. I am able to open it and play with it, but the delete option is disabled and grayed out (see attached image). Now I believe there is nothing wrong with this project as it was just a test project. Would you please write to my manager that it is not harmful to leave that project there as it is? It is also not using any resources. I think he will believe you.”

    As I said, this kind of email makes me uncomfortable. I do not want someone to execute anything on a production server. I often write notes and disclaimers on my posts when something is dangerous to execute on a production server. However, if someone who is not an expert with SQL Server attempts something new on a production server, I think the major issue lies with the person (admin) who gave a new developer permission on the production server. This has to be carefully avoided. Here was my response to the individual.

    “I cannot write anything to your manager, as he has not asked me anything. Honestly, I believe he is correct in his behavior, as you should not have executed anything on the production server without prior approval and testing on a development server. Any R&D must be done on a local box or a development box. I suggest you request your manager to prevent access for users who do not need it. If he is a good manager, he might have already done so by now after this recent event. I also see your screenshot. Here is the issue: while you were playing with the project, you might have closed it halfway, without completing it. For that reason it is locked. You can open it and continue from the same place where you left the project. If you do not need the project any more, right click on it and click on unlock the project. This will enable the DELETE option and you can then delete the project. Next time, be safe out there. It may be dangerous to have admin access to a production server when it is not needed.”

    I have not yet heard from him, but I believe he will take my words positively. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry.My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to rollback fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level.Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience. Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do.Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is and why.Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it againScenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof.I mentioned two technologies in the title -- MSI and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to rollback to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well it will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states. 
The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on Operational guidance as your Configuration remit.What I wanted to say was that I have always been for atomic, transactional-based configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
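    The "make it so" idea is easy to sketch outside of MSI or MSDeploy. Below is a deliberately trivial C# illustration for a single resource – a folder – with a -WhatIf style switch; every name in it is invented for the example, and real configuration steps (sites, shares, certificates) would follow the same check-then-act shape.

        // Each step inspects current state first, so re-running is harmless and a
        // "what if" run can simply report that there is nothing to change.
        // (Requires System and System.IO.)
        static void EnsureFolder(string path, bool whatIf)
        {
            if (Directory.Exists(path))
            {
                Console.WriteLine("{0}: already in the desired state", path);
                return;
            }

            if (whatIf)
            {
                Console.WriteLine("{0}: would be created", path);
                return;
            }

            Directory.CreateDirectory(path);
            Console.WriteLine("{0}: created", path);
        }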

    Read the article

  • Error while updating

    - by Alwin Doss
    I get the following error while updating ubuntu 12.04 LTS installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 430284 files and directories currently installed.) Preparing to replace libxml2-dev 2.7.8.dfsg-5.1ubuntu4.1 (using .../libxml2-dev_2.7.8.dfsg-5.1ubuntu4.2_i386.deb) ... Unpacking replacement libxml2-dev ... Preparing to replace libxml2 2.7.8.dfsg-5.1ubuntu4.1 (using .../libxml2_2.7.8.dfsg-5.1ubuntu4.2_i386.deb) ... Unpacking replacement libxml2 ... Preparing to replace gstreamer0.10-plugins-bad 0.10.22.3-2ubuntu2 (using .../gstreamer0.10-plugins-bad_0.10.22.3-2ubuntu2.1_i386.deb) ... Unpacking replacement gstreamer0.10-plugins-bad ... Preparing to replace libgstreamer-plugins-bad0.10-0 0.10.22.3-2ubuntu2 (using .../libgstreamer-plugins-bad0.10-0_0.10.22.3-2ubuntu2.1_i386.deb) ... Unpacking replacement libgstreamer-plugins-bad0.10-0 ... Preparing to replace ubuntu-keyring 2011.11.21 (using .../ubuntu-keyring_2011.11.21.1_all.deb) ... 
/var/lib/dpkg/info/samba4.postinst: 14: /var/lib/dpkg/info/samba4.postinst: /usr/share/samba/setoption.pl: Permission denied dpkg: error processing samba4 (--configure): subprocess installed post-installation script returned error exit status 126

    Read the article

  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
    For as long as brand social has been around, there’s still an amazing disparity from company to company on the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage. So what does a premiere “worth their weight in gold” Community Manager look like? More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They’re fun to be around. They aren’t salespeople. Give me one Community Manager who’s been at the job 6 months over 5 focus groups any day. Because not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they’ll want to return and keep the relationship going. They’re informers and entertainers, with a true belief in the value of the brand’s proposition. Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don’t understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CM’s will spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they’re building, even if they can’t always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives that simply don’t seem to want to hear it. Some can get the answers fans want quickly, others are frustrated in their ability to respond within an impressive timeframe. In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most…human. They are the true emotional connection to the real life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization. Meet our 3 finalists for Community Manager of the Year. Jeff Esposito with VistaprintJeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies. Stacey Acevero with Vocus In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She’s been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012. Carly Severn with the San Francisco BalletCarly drives engagement, widens the fanbase and generates digital content for America’s oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English. 
We invite you to join us at the first annual Oracle Social Media Summit November 14 and 15 at the Wynn in Las Vegas where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.

    Read the article

  • Too complex/too many objects?

    - by Mike Fairhurst
    I know that this will be a difficult question to answer without context, but hopefully there are at least some good guidelines to share on this. The questions are at the bottom if you want to skip the details. Most are about OOP in general.

    Begin context. I am a jr dev on a PHP application, and in general the devs I work with consider themselves to use many more OO concepts than most PHP devs. Still, in my research on clean code I have read about so many ways of using OO features to make code flexible, powerful, expressive, testable, etc. that is just plain not in use here. The current strongly OO API that I've proposed is being called too complex, even though it is trivial to implement. The problem I'm solving is that our permission checks are done via a message object (my API, they wanted to use arrays of constants) and the message object does not hold the validation object accountable for checking all provided data. Metaphorically, if your perm containing 'allowable' and 'rare but disallowed' is sent into a validator, the validator may not know to look for 'rare but disallowed', but approve 'allowable', which will actually approve the whole perm check. We have like 11 validators, too many to easily track at such minute detail. So I proposed an AtomicPermission class. To fix the previous example, the perm would instead contain two atomic permissions, one wrapping 'allowable' and the other wrapping 'rare but disallowed'. Where previously the validator would say 'the check is OK because it contains allowable,' now it would instead say '"allowable" is ok', at which point the check ends...and the check fails, because 'rare but disallowed' was not specifically okay-ed. The implementation is just 4 trivial objects, and rewriting a 10 line function into a 15 line function.

        abstract class PermissionAtom {
            public function allow(); // maybe deny() as well
            public function wasAllowed();
        }

        class PermissionField extends PermissionAtom {
            public function getName();
            public function getValue();
        }

        class PermissionIdentifier extends PermissionAtom {
            public function getIdentifier();
        }

        class PermissionAction extends PermissionAtom {
            public function getType();
        }

    They say that this is 'not going to get us anything important' and it is 'too complex' and 'will be difficult for new developers to pick up.' I respectfully disagree, and there I end my context to begin the broader questions. So the question is about my OOP, are there any guidelines I should know:
    is this too complicated/too much OOP? Not that I expect to get more than 'it depends, I'd have to see if...'
    when is OO abstraction too much?
    when is OO abstraction too little?
    how can I determine when I am overthinking a problem vs fixing one?
    how can I determine when I am adding bad code to a bad project?
    how can I pitch these APIs? I feel the other devs would just rather say 'its too complicated' than ask 'can you explain it?' whenever I suggest a new class.

    Read the article

  • WebLogic stuck thread protection

    - by doublep
    By default WebLogic kills stuck threads after 15 min (600 s), this is controlled by StuckThreadMaxTime parameter. However, I cannot find more details on how exactly "stuckness" is defined. Specifically: What is the point at which 15 min countdown begins. Request processing start? Last wait()-like method? Something else? Does this apply only to request-processing threads or to all threads? I.e. can a request-processing thread "escape" this protection by spawning a worker thread for a long task? Especially, can it delegate response writing to such a worker without 15 min countdown? My usecase is download of huge files through a permission system. Since a user needs to be authenticated and have permissions to view a file, I cannot (or at least don't know how) leave this to a simple HTTP server, e.g. Apache. And because files can be huge, download could (at least in theory) take more than 15 minutes.

    Read the article

  • ABCpdf7 Not Rendering Images using AddImageUrl

    - by ddango
    Fairly exotic, it seems to me. We recently upgraded/migrated from Windows Server 2003 to 2008, and now it seems that images cannot be rendered when using Doc.AddImageUrl(). (When the PDF is saved, the images appear at the correct dimensions, but the IE8 missing-image X shows up.) If I understand correctly, ABCpdf uses IE rendering internally for this sort of thing. We thought it might be a permission issue, but we've checked IE ESC and that seems to be configured as they suggest. Has anyone else run into a similar problem? Perhaps a code configuration is needed? Not the entire snippet, but the ABCpdf7 stuff:

        using (Doc doc = new Doc())
        {
            doc.HtmlOptions.PageCacheEnabled = false;
            doc.HtmlOptions.UseNoCache = true;
            doc.HtmlOptions.PageCacheClear();
            doc.HtmlOptions.PageCachePurge();
            doc.HtmlOptions.UseResync = true;
            doc.HtmlOptions.ImageQuality = 25;

            int pageID = doc.AddImageUrl(url + "&guid=" + url.GetHashCode());
            while (true)
            {
                if (!doc.Chainable(pageID))
                    break;
                doc.Page = doc.AddPage();
                pageID = doc.AddImageToChain(pageID);
            }

            // file saving etc.
        }

    Read the article

  • Exchange Web Service (EWS) call fails under ASP.NET but not a console application

    - by Vince Panuccio
    I'm getting an error when I attempt to connect to Exchange Web Services via ASP.NET. The following code works if I call it via a console application, but the very same code fails when executed on an ASP.NET web forms page. Just as a side note, I am using my own credentials throughout this entire code sample.

    "When making a request as an account that does not have a mailbox, you must specify the mailbox primary SMTP address for any distinguished folder Ids."

    I thought I might be able to fix the issue by specifying an impersonated user:

        exchangeservice.ImpersonatedUserId = new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "[email protected]");

    But then I get a different error:

    "The account does not have permission to impersonate the requested user."

    The App Pool that the web application is running under is also my own account (same as the console application), so I have no idea what might be causing this issue. I am using .NET Framework 3.5. Here is the code in full:

        var exchangeservice = new ExchangeService(ExchangeVersion.Exchange2010_SP1) { Timeout = 10000 };
        var credentials = new System.Net.NetworkCredential("username", "pass", "domain");
        exchangeservice.Credentials = credentials;
        exchangeservice.AutodiscoverUrl("[email protected]");

        FolderId rootFolderId = new FolderId(WellKnownFolderName.Inbox);
        var folderView = new FolderView(100) { Traversal = FolderTraversal.Shallow };
        FindFoldersResults findFoldersResults = exchangeservice.FindFolders(rootFolderId, folderView);
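    The two errors quoted above typically point at the authenticating account having no mailbox of its own and lacking impersonation rights. A hedged sketch of the pattern that usually goes with them is below: the SMTP address is a placeholder, and the impersonation part only works after an Exchange administrator has granted the ApplicationImpersonation role to the account the app pool runs as.

        // Sketch only – not a verified fix for the question above.
        var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
        service.Credentials = new WebCredentials("username", "pass", "domain");
        service.AutodiscoverUrl("mailbox@example.com");              // placeholder address

        // Requires the ApplicationImpersonation role on the authenticating account.
        service.ImpersonatedUserId =
            new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "mailbox@example.com");

        // Naming the mailbox explicitly avoids the "must specify the mailbox
        // primary SMTP address" error when the caller has no mailbox of its own.
        var inbox = new FolderId(WellKnownFolderName.Inbox, new Mailbox("mailbox@example.com"));
        var folders = service.FindFolders(inbox, new FolderView(100));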

    Read the article

  • C# WebBrowser.ShowPrintDialog() not showing

    - by jeah_wicer
    I have this peculiar problem while wanting to print an HTML report. The file itself is a normal local HTML file, located on my hard drive. To do this, I have tried the following:

        public static void PrintReport(string path)
        {
            WebBrowser wb = new WebBrowser();
            wb.Navigate(path);
            wb.ShowPrintDialog();
        }

    And I have this form with a button with the click event:

        private void button1_Click(object sender, EventArgs e)
        {
            string path = @"D:\MyReport.html";
            PrintReport(path);
        }

    This does absolutely nothing. Which is kind of strange... but things get stranger... When editing the print function to do the following:

        public static void PrintReport(string path)
        {
            WebBrowser wb = new WebBrowser();
            wb.Navigate(path);
            MessageBox.Show("TEST");
            wb.ShowPrintDialog();
        }

    It works. Yes, only adding a MessageBox. The MessageBox is shown and after it comes the print dialog. I have also tried with Thread.Sleep(1000) instead, which doesn't work. Can anyone explain to me what's going on here? Why would a message box make any difference? Can it be some kind of permission problem? I've reproduced this on both Windows 7 and 8, same thing. I made this small application with only the above code to isolate the problem. I am quite sure it works on Windows XP though, since an older version of the application I'm working on runs on it. When trying to do this directly with the mshtml DLL instead I also get problems. Any input or clarification is greatly appreciated!
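    The usual explanation for this behaviour is that WebBrowser.Navigate is asynchronous and needs a pumping message loop before the document is actually loaded: MessageBox.Show pumps Windows messages (so navigation completes while the box is up), whereas Thread.Sleep blocks the UI thread and does not. A minimal event-driven sketch, assuming the call is made on a WinForms UI thread, would be:

        private WebBrowser _wb;   // keep a field reference so the control stays alive

        public void PrintReport(string path)
        {
            _wb = new WebBrowser();
            _wb.DocumentCompleted += (sender, e) =>
            {
                var browser = (WebBrowser)sender;
                browser.ShowPrintDialog();   // the document has finished loading here
            };
            _wb.Navigate(path);
        }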

    Read the article

  • Deleting file with SharePoint List web service fails

    - by Robert MacLean
    I am trying to delete a file from SharePoint using the list web service which is failing with the following error. Error Code: 0x81020030 Message: Invalid file name Detail: The file name you specified could not be used. It may be the name of an existing file or directory, or you may not have permission to access the file. The update XML I sent through is: <Batch OnError="Continue" PreCalc="TRUE" ListVersion="0" ViewName="{8FE4E2C8-939E-4462-ABA2-D633EED7F76E}"><Method ID="1" Cmd="Delete"><Field Name="ID">84</Field><Field Name="FileRef">http://win-4h0xp59sn75:40414/Shared%20Documents/del.txt</Field></Method></Batch> The SharePoint server error logs indicate: ERROR: Failed to OpenThreadToken, LastError=1008 The file you are attempting to save or retrieve has been blocked from this Web site by the server administrators. Things I have tried I've tried the changes in #1372971 which has no helped. I have also tried the changes recommended on the Microsoft Social site, which has also not helped. I have confirmed that the txt file extension is not blocked as indicated here. In addition I can remove the file via the website, it is just on the web service that this fails. The permissions are correct (or rather not in play) as I am running as a SharePoint administrator, which is the same account that uploaded it via the copy web service.
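    For completeness, the batch above is normally submitted through the Lists.asmx proxy roughly as sketched below; listsProxy, the list name and the item ID are placeholders taken from the question, and this plumbing by itself does not explain the 0x81020030 error.

        // Build the Batch element and hand it to UpdateListItems.
        var doc = new System.Xml.XmlDocument();
        var batch = doc.CreateElement("Batch");
        batch.SetAttribute("OnError", "Continue");
        batch.InnerXml =
            "<Method ID=\"1\" Cmd=\"Delete\">" +
            "<Field Name=\"ID\">84</Field>" +
            "<Field Name=\"FileRef\">http://win-4h0xp59sn75:40414/Shared%20Documents/del.txt</Field>" +
            "</Method>";

        // listsProxy is an instance of the generated Lists web service proxy.
        System.Xml.XmlNode result = listsProxy.UpdateListItems("Shared Documents", batch);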

    Read the article

  • ASP.NET PowerShell Impersonation

    - by Ben
    I have developed an ASP.NET MVC Web Application to execute PowerShell scripts. I am using the VS web server and can execute scripts fine. However, a requirement is that users are able to execute scripts against AD to perform actions that their own user accounts are not allowed to do. Therefore I am using impersonation to switch the identity before creating the PowerShell runspace: Runspace runspace = RunspaceFactory.CreateRunspace(config); var currentuser = WindowsIdentity.GetCurrent().Name; if (runspace.RunspaceStateInfo.State == RunspaceState.BeforeOpen) { runspace.Open(); } I have tested using a domain admin account and I get the following exception when calling runspace.Open(): Security Exception Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file. Exception Details: System.Security.SecurityException: Requested registry access is not allowed. The web application is running in full trust and I have explicitly added the account I am using for impersonation to the local administrators group of the machine (even though the domain admins group was already there). I'm using advapi32.dll LogonUser call to perform the impersonation in a similar way to this post (http://blogs.msdn.com/webdav_101/archive/2008/09/25/howto-calling-exchange-powershell-from-an-impersonated-thead.aspx) Any help appreciated as this is a bit of a show stopper at the moment. Thanks Ben
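    The advapi32 LogonUser pattern referred to above usually looks roughly like this sketch (account names are placeholders, and the token should be closed with CloseHandle in real code). Note that the impersonated account still needs whatever registry and profile access PowerShell requires when the runspace opens, so this alone is not guaranteed to make the SecurityException go away.

        // Requires System.Runtime.InteropServices, System.Security.Principal, System.ComponentModel.
        [DllImport("advapi32.dll", SetLastError = true)]
        private static extern bool LogonUser(string user, string domain, string password,
            int logonType, int logonProvider, out IntPtr token);

        private const int LOGON32_LOGON_INTERACTIVE = 2;
        private const int LOGON32_PROVIDER_DEFAULT = 0;

        public void OpenRunspaceAs(Runspace runspace, string user, string domain, string password)
        {
            IntPtr token;
            if (!LogonUser(user, domain, password,
                           LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
            {
                throw new Win32Exception(Marshal.GetLastWin32Error());
            }

            using (var identity = new WindowsIdentity(token))
            using (identity.Impersonate())   // impersonation is reverted when disposed
            {
                runspace.Open();             // opens under the target account
            }
        }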

    Read the article

  • The SaveAs method is configured to require a rooted path, and the path <blah> is not rooted.

    - by Jen
    OK I've seen a few people with this issue - but I'm already using a file path, not a relative path. My code works fine on my PC, but when another developer goes to upload an image they get this error. I thought it was a security permission thing on the folder - but the system account has full access to the folder (though I get confused about how to test which account the application is running under). Also usually running locally doesn't often give you too many security issues :) A code snippet: Guid fileIdentifier = Guid.NewGuid(); CurrentUserInfo currentUser = CMSContext.CurrentUser; string identifier = currentUser.UserName.Replace(" ", "") + "_" + fileIdentifier.ToString(); string fileName1 = System.IO.Path.GetFileName(fileUpload.PostedFile.FileName); string Name = System.IO.Path.GetFileNameWithoutExtension(fileName1); string renamedFile = fileName1.Replace(Name, identifier); string path = ConfigurationManager.AppSettings["MemberPhotoRepository"]; String filenameToWriteTo = path + currentUser.UserName.Replace(" ", "") + fileName1; fileUpload.PostedFile.SaveAs(filenameToWriteTo); Where my config settings value is: Again this works fine on my PC! (And I checked the other person has the folder on their PC). Any suggestions would be appreciated - thanks :)
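    Since the exception is specifically about the path not being rooted, one pragmatic guard is to validate the configured folder before calling SaveAs and fall back to a path mapped inside the site – which also surfaces the case where the other developer's web.config is missing the MemberPhotoRepository value or has it as a relative path. This is only a sketch; the fallback folder name is invented for the example.

        string path = ConfigurationManager.AppSettings["MemberPhotoRepository"];

        if (string.IsNullOrEmpty(path) || !Path.IsPathRooted(path))
        {
            // Fall back to an absolute folder inside the web application (hypothetical name).
            path = HttpContext.Current.Server.MapPath("~/MemberPhotos/");
        }

        string filenameToWriteTo = Path.Combine(path, currentUser.UserName.Replace(" ", "") + fileName1);
        fileUpload.PostedFile.SaveAs(filenameToWriteTo);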

    Read the article

  • IIS 7.5, ASP.NET, impersonation, and access to C:\Windows\Temp

    - by Heinzi
    Summary: One of our web applications requires write access to C:\Windows\Temp. However, no matter how much I weaken the NTFS permission, procmon shows ACCESS DENIED. Background (which might or might not be relevant for the problem): We are using OLEDB to access an MS Access database (which is located outside of C:\Windows\Temp). Unfortunately, this OLEDB driver requires write access to the user profile's TEMP directory (which happens to be C:\Windows\Temp when running under IIS 7.5), otherwise the dreaded "Unspecified Error" OleDbException is thrown. See KB 926939 for details. I followed the steps in the KB article, but it doesn't help. Details: This is the output of icacls C:\Windows\Temp. For debugging purposes I gave full permissions to Everyone. C:\Windows\Temp NT AUTHORITY\SYSTEM:(OI)(CI)(F) CREATOR OWNER:(OI)(CI)(IO)(F) BUILTIN\IIS_IUSRS:(OI)(CI)(S,RD) BUILTIN\Users:(CI)(S,WD,AD,X) BUILTIN\Administrators:(OI)(CI)(F) Everyone:(OI)(CI)(F) However, this is the screenshot of procmon: Desired Access: Generic Read/Write, Delete Disposition: Create Options: Synchronous IO Non-Alert, Non-Directory File, Random Access, Delete On Close, Open No Recall Attributes: NT ShareMode: None AllocationSize: 0 Impersonating: MYDOMAIN\myuser

    Read the article

  • "You do not have permissions to do this operation. Ask your website administrator to change your pem

    - by user288728
    I need some help regarding the above title. After applying the SP1 pack on our SharePoint Server 2007 (MOSS 3.0), workflows aren't working anymore on one of the web applications (located in SSP2). Workflows do work, however, on other existing applications (on SSP1 or SSP2) and on newly created applications (on SSP1 or SSP2).

    Details of the error:
    1. Default / built-in SharePoint workflows are not starting.
    2. SharePoint Designer workflows throw the following error message: "You do not have permissions to do this operation. Ask your website administrator to change your permissions and then try again, or log on with another account that has this permission. To log on with another user account click ok."
    3. Before applying SP1 on MOSS, the workflows were working perfectly fine on this web application.

    So far:
    1. I am logged in as Administrator (both on the SharePoint site and when working with SharePoint Designer).
    2. Administrator is included in the Site Collection Administrators list.
    3. I noticed in SharePoint Designer that the username used for creating the workflow is 'converted' into SHAREPOINT\system (in the Modified By column).

    Has anybody come across/fixed this error? Any help much appreciated. Thanks, D

    Read the article

  • Passenger, Nginx, and Capistrano - Passenger not launching Rails app at all

    - by Throlkim
    Essentially, my route is working perfectly, Passenger seems to be loading - all is hunky-dory. Except that nothing Railsy happens. Here's my Nginx log from starting the server to the first request (ignore the different domain/route - it's because I haven't moved the new domain over yet, and it's returning a 403 error because there's no index file in the public folder): [ pid=24559 file=ext/nginx/HelperServer.cpp:826 time=2009-11-10 00:49:13.227 ]: Passenger helper server started on PID 24559 [ pid=24559 file=ext/nginx/HelperServer.cpp:831 time=2009-11-10 00:49:13.227 ]: Password received. 2009/11/10 00:49:53 [error] 24578#0: *1 directory index of "/var/www/***/current/public/" is forbidden, client: 188.221.195.27, server: ***, request: "GET / HTTP/1.1", host: "***" 2009/11/10 00:49:54 [error] 24578#0: *1 open() "/var/www/***/current/public/favicon.ico" failed (2: No such file or directory), client: 188.221.195.27, server: ***, request: "GET /favicon.ico HTTP/1.1", host: "***", referrer: "***" Someone on the RubyOnRails IRC channel suggested that it might be a webserver permissions problem. I had a suspicion that it might be a filesystem permission problem, but then Nginx runs as www-data and Passenger as root. I can load static files from within the public directory fine, but no Rails application is being launched. Does anyone have an idea? My head is gradually melting away figuring this one out! Edit: Here's the vhost file: server { listen 80; server_name ***; passenger_enabled on; location / { root /var/www/***/current/public; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } }

    Read the article

  • crystal reports : cannot determine the queries necessary to get data for this report

    - by phill
    I've recently come across the following error in one of my Crystal Reports following an accounting system update. Group #1: ? - A : This group section cannot be printed because its condition field is nonexistent or invalid. Format the section to choose another condition field. I've verified every database field being used to ensure it still exists, for consistency, and checked the formulas for that section. No dice. So, hoping to fix the problem, I removed the section using the Section Expert. I ran the same database checks. It then complained with the same error for Group #5, so I removed that as well. Now I have a new and unusual error when I attempt to run 'Show Query'. The error is: "Cannot determine the queries necessary to get data for this report". I have tried logging off and back on to the database and verifying the database. No complaints until I try to run Show Query. When I attempt to run the report, it also throws the same error. Any ideas? Am I approaching this incorrectly? This is done in Crystal Reports 10. Note: this report is run with the SQL sa user to eliminate any permission issues.

    Read the article

  • .NET Remoting Exception not handled Client-Side

    - by DanJo519
    I checked the rest of the remoting questions, and this specific case did not seem to be addressed. I have a .NET Remoting server/client set up. On the server side I have an object with a method that can throw an exception, and a client which will try to call that method. Server: public bool MyEquals(Guid myGuid, string a, string b) { if (CheckAuthentication(myGuid)) { logger.Debug("Request for \"" + a + "\".Equals(\"" + b + "\")"); return a.Equals(b); } else { throw new AuthenticationException(UserRegistryService.USER_NOT_REGISTERED_EXCEPTION_TEXT); } } Client: try { bool result = RemotedObject.MyEquals(myGuid, "cat", "dog"); } catch (Services.Exceptions.AuthenticationException e) { Console.WriteLine("You do not have permission to execute that action"); } When I call MyEquals with a Guid which causes CheckAuthentication to return false, .NET tries to throw the exception and says the AuthenticationException was unhandled. This happens server side. The exception is never marshaled over to the client-side, and I cannot figure out why. All of the questions I have looked at address the issue of an exception being handled client-side, but it isn't the custom exception but a base type. In my case, I can't even get any exception to cross the remoting boundary to the client. Here is a copy of AuthenticationException. It is in the shared library between both server and client. [Serializable] public class AuthenticationException : ApplicationException, ISerializable { public AuthenticationException(string message) : base(message) { } public AuthenticationException(SerializationInfo info, StreamingContext context) : base(info, context) { } #region ISerializable Members void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context) { base.GetObjectData(info, context); } #endregion }
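    One common culprit when a server-side exception never arrives at a remoting client as its original type is the remoting customErrors filter, which replaces exception details with a generic ServerException for remote callers. If that is what is happening here, relaxing it during development lets the serialized AuthenticationException travel across; the sketch below shows the programmatic switch, and the equivalent config setting is <customErrors mode="off" /> under <system.runtime.remoting> on the server.

        // Development-only: allow full exception details to flow to remote callers.
        using System.Runtime.Remoting;

        public static class RemotingSetup
        {
            public static void DisableCustomErrorsForDevelopment()
            {
                RemotingConfiguration.CustomErrorsMode = CustomErrorsModes.Off;
            }
        }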

    Read the article

  • Retrieving the COM class factory for component error while generating word document

    - by TheDPQ
    Hello, I am trying to edit a Word document from VB.NET using, for the most part, this code: How to automate Word from Visual Basic .NET to create a new document http://support.microsoft.com/kb/316383. It works fine on my machine, but when I publish to the server I get the following error: Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80070005. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.UnauthorizedAccessException: Retrieving the COM class factory for component with CLSID {000209FF-0000-0000-C000-000000000046} failed due to the following error: 80070005.

    The actual error happens when I try to just create a Word application object:

        Dim oWord As New Word.Application

    I am using Visual Studio 2008 and VB.NET 3.5. I made a reference to the "Microsoft Word 10.0 Object Library" and I see the Interop.Word.dll file in the bin directory. I am using MS Office 2003 on the development machine and Windows Server 2003. I am still fairly new to .NET and don't have much knowledge about Windows Server, but "UnauthorizedAccessException" sounds like a permission issue. I'm wondering if someone could point me in the right direction on what I might need to do to give my little application access to use Word. Thank you so much for your time.

    Read the article

  • how to access(read/write) local file system from webkit/javascript?

    - by ganapati hegde
    Hi, i am using Webkitgtk for rendering my HTML pages.Now,say i am browsing the page, i select some text while reading,i want to save/write down the selected text on my local file say /home/localfile.txt. Is any way to access(read/write) local file system using webkit? In case of firefox, i can do like below. try { netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect"); } catch (e) { alert("Permission to save file was denied."); } var file = Components.classes["@mozilla.org/file/local;1"] .createInstance(Components.interfaces.nsILocalFile); file.initWithPath( "/home/localfile.txt" ); if ( file.exists() == false ) { alert( "Creating file... " ); file.create( Components.interfaces.nsIFile.NORMAL_FILE_TYPE, 420 ); } var outputStream = Components.classes["@mozilla.org/network/file-output-stream;1"] .createInstance( Components.interfaces.nsIFileOutputStream ); outputStream.init( file, 0x04 | 0x08 | 0x20, 420, 0 ); var output = "my data to be written to the local file"; var result = outputStream.write( output, output.length ); outputStream.close(); The above code is written in a javascript file and the js file is linked to the html file.When some text is selected,the js function will be called and action will be taken place.How can i achieve the same result using Webkit? Thanks...

    Read the article

  • Creating ViewResults outside of Controllers in ASP.NET MVC

    - by Craig Walker
    Several of my controller actions have a standard set of failure-handling behavior. In general, I want to:
    Load an object based on the Route Data (IDs and the like).
    If the Route Data does not point to a valid object (ex: through URL hacking), then inform the user of the problem and return an HTTP 404 Not Found.
    Validate that the current user has the proper permissions on the object.
    If the user doesn't have permission, inform the user of the problem and return an HTTP 403 Forbidden.
    If the above is successful, then do something with that object that's action-specific (ie: render it in a view).

    These steps are so standardized that I want to have reusable code to implement the behavior. My current plan of attack was to have a helper method to do something like this:

        public static ActionResult HandleMyObject(this Controller controller, Func<MyObject, ActionResult> onSuccess)
        {
            var myObject = MyObject.LoadFrom(controller.RouteData);

            if (myObject == null)
                return NotFound(controller);
            if (myObject.IsNotAllowed(controller.User))
                return NotAllowed(controller);

            return onSuccess(myObject);
        }

        // NotAllowed() is pretty much the same as this
        public static ActionResult NotFound(Controller controller)
        {
            controller.HttpContext.Response.StatusCode = 404;
            // NotFound.aspx is a shared view.
            ViewResult result = controller.View("NotFound");
            return result;
        }

    The problem here is that Controller.View() is a protected method and so is not accessible from a helper. I've looked at creating a new ViewResult instance explicitly, but there are enough properties to set that I'm wary about doing so without knowing the pitfalls first. What's the best way to create a ViewResult from outside a particular Controller?
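    For what it's worth, constructing the ViewResult directly is usually all that is needed: ViewName, ViewData and TempData are public setters on System.Web.Mvc.ViewResult, and anything left unset falls back to framework defaults. A minimal sketch of NotFound() written that way, without touching the protected Controller.View():

        public static ActionResult NotFound(Controller controller)
        {
            controller.HttpContext.Response.StatusCode = 404;

            return new ViewResult
            {
                ViewName = "NotFound",            // resolved through the shared view locations
                ViewData = controller.ViewData,   // keeps model/ModelState available to the view
                TempData = controller.TempData
            };
        }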

    Read the article
