Search Results

Search found 22383 results on 896 pages for 'package private'.

Page 570/896

  • Routing and Remote Access rule not being applied internally (Windows SBS)

    - by Tim Saunders
    Hi, I have a Microsoft Small Business Server. I have pointed an external domain name at the server's fixed external IP address. In Routing and Remote Access I have defined a service for our Subversion server as follows:

        Incoming port: 8443
        Private address: 192.168.10.5
        Outgoing port: 8443

    192.168.10.5 is our development server, not the SBS (which is at 192.168.10.1). This rule works correctly if I am not on our internal network; however, if I am on the internal network the rule does not get applied. What can I do or set so that this rule is applied both internally and externally (so users with laptops etc. don't keep having to change the URL by which they access the Subversion server)? Not sure what other info you may need, so please let me know if more details are required. T

    Read the article

  • Customer Experience and BPM – From Efficiency to Engagement

    - by Ajay Khanna
    Over the last few years, the focus of BPM has mainly been on improving business efficiency: creating more efficient processes, removing bottlenecks, automating processes. That still holds true, and why not? Isn't BPM all about continuous improvement? BPM facilitates and requires business and IT collaboration. But business also requires working with customers. Don't we want to get close to and collaborate with our customers? This is where Social BPM takes BPM a step further. It not only allows people within an organization to collaborate to design exceptional processes, and not only lets them collaborate on resolving a case, but also lets them engage with customers. Engaging with customers means, first of all, connecting with them on their terms and turf. Take a new account opening process. Can a customer call you and initiate the process? Can a customer email you, or go to the website, and initiate the process? Can they tweet you and initiate the process? Can they check the status of the process via any channel they like? Can they take a picture of a damaged package delivery and kick off a returns process from their mobile device, with GIS data? Yes, these are various aspects to consider during process design if the goal is better customer experience and engagement. Of course, we want to be efficient and agile, but the focus here needs to be the customer. Now, when customers are tweeting about your products and posting on Facebook and Yelp about their experience with your company (and your processes), you need to seek out that information. You need to gather and analyze customer feedback on social media and use it to improve your processes and products. This is an excellent source of product and process ideation. So BPM is no longer only about improving back-office process efficiency; it is moving into a new and exciting phase of improving front-line customer-facing processes, customer experience and engagement. Let me know how you think BPM can enhance customer experience.

    Read the article

  • Installed LibreOffice 4 with ppa, how do I remove it and go back to LibreOffice 3?

    - by MMA
    EDIT This question is not at all a duplicate of How to downgrade from LibreOffice 4.0 to 3.6? The above-mentioned question talks about downgrading from a specific version of LibreOffice, namely from 4.0 to 3.6. The solutions mentioned there are not the ones I am looking for. They will work, but I wanted a general solution for going from a higher version to a lower one, without adding a PPA or downloading .deb files. The above solutions suggest either downloading .deb files for LibreOffice 3.6 or adding a repository for it. Furthermore, some of the answers put disproportionate stress (applicable to those solutions, admittedly) on using Synaptic rather than a general command-line solution. That made me wonder: if, at this very moment, I took a fresh computer and installed Ubuntu 12.04, LibreOffice would install without a hitch. Then why can I not install LibreOffice on my 12.04 machine today from the simple command line? This answer to my question clarified everything: I need to use ppa-purge, which resets all packages from a PPA to the standard versions released for my distribution. Basically it is a way to restore my system to the way it was before I installed packages from a PPA. This article further elaborates on the idea. The above-mentioned answer worked perfectly for me. It was actually an education, since it taught me how to downgrade a package that was added via a PPA. I had upgraded from LibreOffice 3 to LibreOffice 4 using the PPA. Now, since I found that LibreOffice 4 has some issues, including handling my native language, I want to move back to LibreOffice 3. To accomplish this, I removed the LibreOffice config directory from my home directory and then purged LibreOffice from my machine:

        sudo apt-get purge libreoffice-*

    Then I removed the relevant PPAs using the sudo apt-add-repository --remove command, and then ran sudo apt-get update. Now, when I try to install LibreOffice using the command sudo apt-get install libreoffice, I get an avalanche of output about unmet dependencies, something like:

        The following packages have unmet dependencies:
         libreoffice : Depends: libreoffice-core (= 1:3.5.7-0ubuntu4) but it is not going to be installed
        (snipped)

    If I dig into the issue further, using the command sudo apt-get install libreoffice-core, I get:

        The following packages have unmet dependencies:
         libreoffice-core : Depends: libreoffice-common (> 1:3.5.7) but it is not going to be installed
                            Depends: libexttextcat0 (>= 2.2-8) but it is not going to be installed
                            Depends: ure (>= 3.5.7~) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Could you please tell me how to install LibreOffice 3 on my machine? I am using Ubuntu 12.04 LTS.

    Read the article

  • Rotate canvas along its center based on user touch - Android

    - by Ganapathy
    I want to rotate the canvas circularly, about its center, based on user touch, like an old phone dialer. Currently it rotates about the top-left corner instead of the center, so I can only see a quarter of the rotated image. Any idea? I have tried the following:

        onDraw(Canvas canvas) {
            canvas.save();
            // do my rotation
            canvas.rotate(rotation, 0, 0);
            canvas.drawBitmap(((BitmapDrawable) d).getBitmap(), 0, 0, p);
            canvas.restore();
        }

        @Override
        public boolean onTouchEvent(MotionEvent e) {
            float x = e.getX();
            float y = e.getY();
            updateRotation(x, y);
            mPreviousX = x;
            mPreviousY = y;
            invalidate();
            return true; // added: the posted snippet was missing a return value
        }

        private void updateRotation(float x, float y) {
            double r = Math.atan2(x - centerX, centerY - y);
            rotation = (int) Math.toDegrees(r);
        }
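
    A minimal sketch of the usual fix, assuming the desired pivot is the view's center: Canvas.rotate accepts a pivot point as its second and third arguments, so pass half the view's width and height instead of (0, 0). The fields d, p and rotation are the poster's own.

        @Override
        protected void onDraw(Canvas canvas) {
            canvas.save();
            // Rotate about the view's center rather than the default (0, 0) origin.
            canvas.rotate(rotation, getWidth() / 2f, getHeight() / 2f);
            canvas.drawBitmap(((BitmapDrawable) d).getBitmap(), 0, 0, p);
            canvas.restore();
        }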

    Read the article

  • How can I restrict SSH access when the source IP is dynamic

    - by Supratik
    Hi. I want to restrict SSH access to our live web server to our office's static IP, blocking all other IPs. However, some employees connect to this live server from their dynamic IPs, so it is not always possible for me to change the iptables rules on the live server whenever an employee's dynamic IP changes. I tried putting them on the office VPN and allowing SSH access only from the office IP, but the office connection is slow compared to our employees' private internet connections, and moreover it adds extra overhead to our office network. Is there any way I can solve this problem?

    Read the article

  • Which Bliki (Blog+Wiki) solution can you recommend?

    - by asmaier
    I'm searching for a good Bliki solution, meaning a combination of blog and wiki that I can install on my own web space. I would like to be able to write articles in wiki style, much as with MediaWiki. So I want to use a wiki markup language; have a revision history, comments, and internal links to other pages (maybe in other languages); and be able to edit articles collaboratively. On the other hand, I would like to have a blog-like view of my articles, showing new articles (and changes to existing articles) in time order. It would be nice to be able to search through the articles and also to tag them, so one could generate a tag cloud. A nice feature would also be the ability to order articles by views, or even a voting system for articles. Also good would be a permission system to keep certain articles private, showing them only to people logged in to the platform. Apart from these nice-to-have features, an absolute must-have for the Bliki platform I'm searching for is the ability to handle math equations (written in LaTeX syntax) and display them either as pictures, as MediaWiki does, or, even better, using MathJax. At the moment I'm using a web service called wikiDot, which offers some of the mentioned features, but the free version shows too many advertisements, the blog feature is not mature, the design is quite ugly, and page loading is often slow. So I want to install a Bliki solution on my own web space. Can you recommend any solution for that?

    Read the article

  • Example of persisting an inheritance relationship using ORM

    - by Schemer
    I have some experience with OOP and RDBs, but very little exposure to web programming. I am trying to understand what non-trivial types of problems are solved by ORM. Of course, I am familiar with the need for data persistence, but I have never encountered a need for persisting relationships between objects, a situation which is indicated in many online articles about ORM. I am not asking about the process of persisting a POJO to a database and restoring it later. Nor am I asking why ORM frameworks are useful (or a pain in the butt) for doing so. I am particularly interested in how the need arises to persist and restore relationships between objects. In various documentation I have seen many examples of persisting POJOs to a database, but the examples have all been for very simple objects that are essentially nothing more than records anyway: a constructor, some private fields, and getter/setter methods. The motivation for persisting such an "object-record" seems obvious and trivial. This example, the Hibernate ORM Tutorial, offers such a case, but then goes on to discuss mismatch issues of granularity, inheritance, identity, associations, and navigation that are not motivated by the example. If someone could offer a toy example of an instance where, say, the need arises to persist an inheritance relationship, I would be grateful. This might be blindingly obvious to anyone who has already encountered the situation, but I have not, and a great deal of searching and reading has not turned up any examples.
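
    A sketch of the kind of toy example being asked for, using JPA-style annotations (the class and field names here are invented): suppose payments can be made by credit card or by cheque, and code elsewhere works against the Payment supertype. The ORM then has to record, for each persisted row, which concrete subclass to instantiate on retrieval; that is an inheritance relationship being persisted.

        import javax.persistence.*;

        @Entity
        @Inheritance(strategy = InheritanceType.SINGLE_TABLE)
        @DiscriminatorColumn(name = "PAYMENT_TYPE")
        abstract class Payment {
            @Id @GeneratedValue
            private Long id;
            private double amount;
        }

        @Entity
        @DiscriminatorValue("CC")
        class CreditCardPayment extends Payment {
            private String cardNumber; // subclass-specific state in the shared table
        }

    When a query for Payment entities runs, rows whose PAYMENT_TYPE column reads "CC" must come back as CreditCardPayment instances, which is the sort of inheritance mismatch the Hibernate tutorial alludes to.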

    Read the article

  • Can't update Nvidia driver: error near the end of the install

    - by user94843
    I just got Ubuntu (first-timer, so be very descriptive). I think there is a problem with my Nvidia update; it won't let me update. The update's name in Update Manager is "NVIDIA binary Xorg driver, kernel module and VDPAU library". When I attempt to install it, it starts out fine, but near the end I get a window titled "Package operation failed" with these details:

        installArchives() failed:
        Setting up nvidia-current (295.40-0ubuntu1) ...
        update-initramfs: deferring update (trigger activated)
        INFO:Enable nvidia-current
        DEBUG:Parsing /usr/share/nvidia-common/quirks/put_your_quirks_here
        DEBUG:Parsing /usr/share/nvidia-common/quirks/dell_latitude
        DEBUG:Parsing /usr/share/nvidia-common/quirks/lenovo_thinkpad
        DEBUG:Processing quirk Latitude E6530
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with Dell Inc.
        DEBUG:Quirk doesn't match
        DEBUG:Processing quirk ThinkPad T420s
        DEBUG:Failure to match Gigabyte Technology Co., Ltd. with LENOVO
        DEBUG:Quirk doesn't match
        Removing old nvidia-current-295.40 DKMS files...
        Loading new nvidia-current-295.40 DKMS files...
        Error! DKMS tree already contains: nvidia-current-295.40
        You cannot add the same module/version combo more than once.
        dpkg: error processing nvidia-current (--configure):
         subprocess installed post-installation script returned error exit status 3
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for initramfs-tools ...
        update-initramfs: Generating /boot/initrd.img-3.2.0-31-generic
        Warning: No support for locale: en_US.utf8
        Errors were encountered while processing:
         nvidia-current

    This is followed by "Error in function:" and a verbatim repeat of the same "Setting up nvidia-current (295.40-0ubuntu1) ..." output.

    Read the article

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor's pre-built computer, but they were more expensive – by a lot. As time moved on and the computing industry matured, I found that I could buy a vendor's system as cheaply – and in some cases far more cheaply – than I could build it myself. This paradigm holds true for almost any product, even clothing and furniture. And it has also held true for software... mostly. If you need an office productivity package, you simply buy one or use open-source software. There's really no need to write your own word processor – it's kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no "cloud solution in a box". Sure, if you're after "Software as a Service"-type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure), you can simply use one of those; and if you just want to deploy a virtual machine (Windows Azure Virtual Machines), you can get that. But if you're looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue. It's all about starting from the problem end first. We've become so accustomed to looking for a box of software that will solve the problem that we often start with the solution and try to fit it to the problem, rather than the other way around. When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It's interesting to monitor the conversation and watch how many times we deviate from the problem into the solution. So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

    Read the article

  • Public DNS Server fails on Windows Amazon EC2

    - by Adroidist
    I have started a new Windows server instance on Amazon EC2. The security group has the following rules:

        Ports  Protocol  Source
        22     tcp       0.0.0.0/0
        80     tcp       0.0.0.0/0
        443    tcp       0.0.0.0/0
        3389   tcp       0.0.0.0/0
        53     udp       0.0.0.0/0
        -1     icmp      0.0.0.0/0

    I am able to ping the machine's public DNS name, and I can connect to it using Windows Remote Desktop Connection. However, when I put the public DNS name into my web browser, it fails to connect. Moreover, I tried FileZilla and PuTTY (and in both I loaded the private key .pem), but I receive "connection timed out". I disabled the firewall on both my PC and the instance (which I accessed using Remote Desktop Connection). Can you please tell me what I am missing?

    Read the article

  • Postfix - How to configure to send these emails?

    - by Jon
    I want my mail server to send mail from my local application "from" any user-supplied email address "to" my own address, say "[email protected]". The MX records for "mysite.com" actually point to a different server, even though the outgoing mail server is running with mydomain set to "mysite.com". Perhaps this is part of the problem? Postfix is currently causing an SMTPRecipientsRefused error within the Python application. Can anyone point me to what to change in the configuration? Thanks. postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        config_directory = /etc/postfix
        inet_interfaces = all
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        mydestination = mysite.com, localhost.com, , localhost, *
        myhostname = mysite.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        readme_directory = no
        recipient_delimiter = +
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes

    Read the article

  • Which OS should I choose for my VPS?

    - by Camran
    I am about to order a virtual private server now, and have no experience with any Linux OS whatsoever. I am a fast learner, however... My VPS provider offers these OSes:

        Ubuntu 8.04 LTS
        Ubuntu 8.04 LTS 64-bit
        Ubuntu 9.10
        Ubuntu 9.10 64-bit
        Debian 5.0
        Gentoo
        Gentoo 64-bit
        Ubuntu 8.04 LTS + Ruby on Rails

    I don't know what these are; however, I have heard about Ubuntu a lot, and know there is a lot of information about it on the Internet. Will it make any difference which one I choose? I plan on running a classifieds website, which uses PHP, MySQL, Java (for Solr) and the usual standard stuff (HTML, JavaScript...). Which should I choose? And what is the next step after choosing one? Thanks

    Read the article

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"?

        public int test1(String str) {
            return str.length();
        }

        public int test2(String str) {
            if (str == null) {
                System.err.println("Passed null!.");
                // forgotten return;
            }
            return str.length();
        }

    The difference, or at least the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks the variable "str" may be null, yet still dereferences it after the forgotten return. In NetBeans IDE 7.3 you see a hint for "test2" but no hint for "test1", since in the latter case we don't know anything about the developer's intention for the variable, and providing a hint there would flood the source code with too many false positives. Annotations are supported in understanding how a piece of code is intended to be used. If method return types use @Nullable, @NullAllowed or @CheckForNull, the value is considered "strongly possible to be null", as it also is when the variable is explicitly tested against null, as shown above. When using @NotNull, @NonNull or @Nonnull, the value is considered non-null. (The exact FQNs of the annotations are ignored; only simple names are checked.) Hints are displayed for the non-null cases (the "strongly possible to be null" hints are not shown here, though you can see one of them in the screenshot above), together with a comment showing what appears when you hover over the hint. There isn't a one-size-fits-all refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via tools in NetBeans IDE, as shown yesterday for class member reordering across code bases. However, you can instead go to Source | Inspect and scan a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area. Thanks to Jan Lahoda for help with the info above; any misunderstandings are my own fault. He reports that this currently also works in NetBeans IDE 7.3 dev builds for fields, but that it may need to be disabled there, since right now too many false positives are returned.
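
    A small sketch of the annotated style the post describes. Since only simple annotation names are checked, any @NonNull/@Nullable annotation on the classpath should do; the exact hint wording below is paraphrased, not the IDE's literal text:

        public int test3(@NonNull String str) {
            return str.length(); // no hint: the parameter is declared non-null
        }

        public int test4(@Nullable String str) {
            return str.length(); // hint: dereferencing a value marked as possibly null
        }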

    Read the article

  • Building Enterprise Smartphone App – Part 4: Application Development Considerations

    - by Tim Murphy
    This is the final part in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Application Development Considerations

    Now we get to the actual building of your solutions. What are the skills and resources needed to develop a smartphone application in the enterprise?

    Language Knowledge

    One of the first things to consider when choosing a platform is its language: which do you have the strongest in-house skill base for, and which skills can you most easily acquire? If you already have developers who know Java or C#, you may want to use Android or Windows Phone respectively. You should also take into consideration the market availability of developers: if your key developer leaves, how easy is it to find a knowledgeable replacement? A second consideration is the qualities exposed by the languages of a particular platform. How well do the development language and its associated frameworks support things like security and access to the features of the smartphone hardware? This plays into your overall cost of ownership if you have to create that infrastructure on your own.

    Manage Limited Resources

    Everything is limited on a smartphone: battery, memory, processing power, network bandwidth. When developing your applications you will have to keep your footprint as small as possible in every way. This means not running unnecessary background processes that drain the battery, and not pulling more data over the airwaves than you have to. You also want to keep your on-device storage in as compact a format as possible.

    Mobile Design Patterns

    A number of design patterns have either come to life because of smartphone development or have been adapted for it. The main pattern in the Windows Phone environment is MVVM (Model-View-ViewModel). This is great for overall application structure and separation of concerns; the fun part is trying to keep that separation as pure as possible. Many of the other patterns may or may not have strict definitions, but some you need to be concerned with are push notification, asynchronous communication and offline data storage. Real estate is limited on smartphones and even tablets, and you are also limited in the type of controls that can be represented in the UI. This means rethinking how you modularize your application. Typing is also much harder to do, so you want to reduce it as much as possible. This leads to UI patterns. While not what we would traditionally think of as design patterns, the guidance each platform publishes for UI design is critical to the success of your application: if users find the application difficult to navigate, they will not use it.

    Development Process

    Because of the differences in required development tools, test devices, and certification and deployment processes, your teams will need to learn new ways of working together. This includes integrating the service contracts of back-end systems with mobile applications. You will also want to present consistency across different access points to corporate data: your web site may have more functionality than your smartphone application, but the two should share a consistent core set of functionality. All of this requires greater communication between sub-teams of your developers.

    Testing Process

    Testing smartphone apps has a lot more to do with what happens when you lose connectivity, or when the user navigates away from your application. There are many more opportunities for the user or the device to perform disruptive acts, and aside from the main business requirements this should be your main testing concentration. You will need to do things like setting the phone to airplane mode and seeing what the application does, in order to weed out any gaps in your handling of communication interruptions.

    Need For Outside Experts

    Since this is a development area that is new to most companies, the need for experts is much greater. Whether these are consultants, vendor representatives or just development community forums, you will need to establish expert contacts. Nothing is more dangerous to your project timelines than a lack of knowledge, so make sure you know who to call to avoid lengthy delays caused by knowledge gaps.

    Security

    Security has to be a major concern for enterprise applications. You aren't dealing with just someone's game standings; you are dealing with a company's intellectual property and competitive advantage. As such, start by limiting access to the application itself. Once the user is in the app, ensure that the data is secure at all times, both in local storage and across the wire. If a platform doesn't natively support encryption for these functions, you will need to find alternatives to secure your data. You also need to keep secret (encryption) keys obfuscated, or locked away outside the application; otherwise people can disassemble the code and break your encryption.

    Offline Capabilities

    As discussed earlier, one of your biggest concerns is losing connectivity, and a good portion of your code may be dedicated to handling loss of connection and reconnection. What do you do if you lose the network? Queue all your transactions and store any supporting data so that operations can continue offline (see the sketch at the end of this post). To support this you will need to determine the flat-file or local database capabilities of the platform. Any failed transaction will need a retry mechanism, whether automatic or user-initiated. This also applies to your services, which will need to be able to roll back partially completed transactions. Whatever you do, don't ignore this area when you are designing your system.

    Deployment

    Each platform has different deployment capabilities, and some are more suited to enterprise situations than others. Apple's approach is probably the most mature at the moment; prior to the current generation of smartphone platforms it would have been Windows CE. Windows Phone 7 has the limitation that apps must be distributed through the same network as public-facing applications. You mark them as private, which means they are only accessible by a direct URL. Unfortunately this does not make them undiscoverable (although discovery is very difficult). This will change with Windows Phone 8, where companies will be able to certify their own applications and distribute them. Given this, Windows Phone applications need to be diligent about application access in order to keep them restricted to the company's employees. My understanding is that Android's deployment schemes are much less standardized than either iOS's or Windows Phone's; someone would have to confirm or deny that for me, though, since I have not yet put the time into researching that platform further. Given my limited exposure to the iOS and Android platforms I have not been able to confirm this, but there are varying degrees of user involvement in installing and updating applications: at one extreme the user just goes to a website to do the install, while in other cases they may need to download files and perform steps to install them.

    Future

    Bluetooth: today we use Bluetooth for keyboards, mice and headsets. In the future it could be used by service techs to interrogate car computers, manufacturing systems or possibly retail machines. This would open smartphones to greater use as almost a Star Trek tricorder: you would get all your data, as well as being able to use the phone as a universal remote for just about any device or machine.

    Better corporation-controlled deployment: at least in the Windows Phone world, the upcoming release of Windows Phone 8 will include a private certification and deployment option that is not available with Windows Phone 7 (Mango), where we have to run apps through the Marketplace certification process and use a targeted distribution method.

    Platform-independent approaches: HTML5 and JavaScript with web services have lately become a popular approach, not only for creating flexible web sites but also for creating cross-platform mobile applications. I'm not yet convinced that this lowest-common-denominator approach is viable in most cases, but it has its place and seems to be growing. Be sure to keep an eye on it.

    Summary

    From my perspective, enterprise smartphone applications can offer a great competitive advantage to many companies. They are not cheap to build and should be approached cautiously. Understand the factors I have outlined in this series, do your due diligence, and see if there is a portion of your business that can benefit from the mobile experience. del.icio.us Tags: Architecture,Smartphones,Windows Phone,iOS,Android
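
    As promised under "Offline Capabilities" above, a minimal sketch of a store-and-retry transaction queue, written here in plain Java to stay platform-neutral; the class and method names are invented for illustration:

        import java.util.ArrayDeque;
        import java.util.Queue;

        public class OfflineTransactionQueue {
            private final Queue<Runnable> pending = new ArrayDeque<Runnable>();

            // While offline, work is queued instead of failing outright.
            public synchronized void submit(Runnable transaction) {
                pending.add(transaction);
            }

            // Call when connectivity returns; anything that fails is re-queued.
            public synchronized void flush() {
                int toAttempt = pending.size();
                for (int i = 0; i < toAttempt; i++) {
                    Runnable tx = pending.poll();
                    try {
                        tx.run();
                    } catch (RuntimeException e) {
                        pending.add(tx); // keep it for the next retry pass
                    }
                }
            }
        }

    A real implementation would also persist the queue to local storage so it survives an app restart, as the post recommends.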

    Read the article

  • How do I implement movement in a WPF Adventure game?

    - by ZeroPhase
    I'm working on a short WPF adventure game. The only major hurdle right now is how to animate objects on the screen correctly. I've experimented with DoubleAnimation and ThicknessAnimation; both enable movement of the character, but the speed is a bit erratic. The objects I'm trying to move around are labels in a grid, and I'm checking the mouse's position in terms of the canvas that holds the grid. Does anyone have any suggestions for coding the movement while still allowing mouse clicks to pick up items when needed? It would be nice if I could continue using the Visual Studio GUI editor. By the way, I'm fine with scrapping labels-in-a-grid for a more suitable object to manipulate. Here's my movement code:

        ThicknessAnimation ta = new ThicknessAnimation();

    The event handling the movement:

        private void Hansel_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
        {
            ta.FillBehavior = FillBehavior.HoldEnd;
            ta.From = Hansel.Margin;
            double newX = Mouse.GetPosition(PlayArea).X;
            double newY = Mouse.GetPosition(PlayArea).Y;
            if (newX < Convert.ToDouble(Hansel.Margin.Left))
            {
                //newX = -1 * newX;
                ta.To = new Thickness(0, newY, newX, 0);
            }
            else if (newY < Convert.ToDouble(Hansel.Margin.Top))
            {
                newY = -1 * newY;
            }
            else
            {
                ta.To = new Thickness(newX, newY, 0, 0);
            }
            ta.Duration = new Duration(TimeSpan.FromSeconds(2));
            Hansel.BeginAnimation(Grid.MarginProperty, ta);
        }

    Screenshot with annotations: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps9d4a33cc.png
    Screenshot with example movement: http://i1118.photobucket.com/albums/k608/sealclubberr/clickToMove_zps51f2359f.jpg

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud

    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this: a new architecture for consolidating databases and simplifying operations in the cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally, I'll mention some of the major use cases we see for Oracle Multitenant.

    Industry Trends

    We always start by talking to our customers about the pressures and challenges they're facing and the trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; to be faster, leaner and cheaper; and to deliver services 24/7. Big companies have achieved scale; now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and of provisioning new ones. The requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends: the techniques used to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications onto fewer, larger servers. There's a move to standardized services – even self-service.

    Consolidation

    Consolidation is not new; companies have tried various approaches to consolidating databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. One downside of this is that you pay the resource and management overheads of an OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, a single physical server supports multiple databases, one per application. The OS overhead is shared, but the RDBMS process and memory overheads are replicated per application. Think about our traditional Oracle Database architecture: every time we create a database, be it a production, development or test database, what do we do?
    We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every database we create, and as more and more databases are fired up, these replicated overheads quickly consume the available server resources, limiting the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database; individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required, and there is no tenant isolation: one bad apple can spoil the whole batch.

    New Architecture & Benefits

    In Oracle Database 12c, we have a new multitenant architecture featuring pluggable databases. This delivers all the resource-utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database, not watered down in any way. From the perspective of an application or an end user, nothing has changed at all. This is very important because it means no application changes are required to adopt this new architecture: the many thousands of applications built on Oracle databases are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Much as with the schema consolidation model, we have only a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – "managing many as one" – we can achieve great reductions in operating expenses (OpEx) as well, while retaining granular control where appropriate. Furthermore, pluggability gives us portability, and this adds a tremendous amount of agility: we can simply unplug a PDB from one CDB and plug it into another, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.

    Use Cases

    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones.
        - Development / Testing: individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
        - Consolidation: of disparate applications using fewer, more powerful servers
        - Software as a Service: deploying separate copies of identical applications to individual tenants
        - Database as a Service: typically self-service provisioning of databases on the private cloud
        - Application Distribution from ISV / Installation by Customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…); just plug in a PDB!
        - High-volume data distribution: literally via disk drives in envelopes distributed by truck! Distribution of things like GIS or MDM master databases
        - …various others!

    Benefits

    Previous approaches to consolidation have involved a trade-off between reductions in capital expenses (CapEx) and operating expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:

        - Minimize CapEx: more applications per server
        - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
        - Maximize Agility: cloning for development and testing; portability through pluggability; scalability with RAC
        - Ease of Adoption: applications run unchanged; it's a pure deployment choice. Neither the database back end nor the application needs to be changed.

    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear: visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough and exhaustively detailed white paper about Oracle Multitenant, which is available here.

    Follow me: I tweet @OraclePDB #OracleMultitenant

    Read the article

  • Sonicwall Enhanced With One-To-One NAT, Firewall Blocking Everything

    - by Justin
    Hello. I just migrated from a SonicWALL TZ180 (Standard) to a SonicWALL TZ200 (Enhanced). Everything is working except that the firewall rules are blocking everything. All hosts are online and being assigned correct IP addresses, and I can browse the internet on the hosts. I am using one-to-one NAT to translate public IP addresses to private ones:

        64.87.28.98 -> 192.168.1.2
        64.87.28.99 -> 192.168.1.3
        etc.

    First order of business is to get ping working. My rule in the new firewall is (from WAN to LAN):

        SOURCE  DESTINATION                SERVICE  ACTION  USERS
        ANY     192.168.1.2-192.168.1.6    PING     ALLOW   ALL

    This should be working, but it is not. I even tried changing the destination to the public IP addresses, but still no luck:

        SOURCE  DESTINATION                SERVICE  ACTION  USERS
        ANY     64.87.28.98-64.87.28.106   PING     ALLOW   ALL

    Any ideas what I am doing wrong?

    Read the article

  • Can I disable pam_loginuid? Can I find out the options used to configure the kernel?

    - by dunxd
    I am getting a lot of the following types of error in my secure log on a CentOS 5.4 server:

        crond[10445]: pam_loginuid(crond:session): set_loginuid failed opening loginuid
        sshd[10473]: pam_loginuid(sshd:session): set_loginuid failed opening loginuid

    I've seen discussion of this being caused by using a non-standard kernel built without the correct CONFIG_AUDIT and CONFIG_AUDITSYSCALL options. Where this is the case, the advice is to comment out some lines in the pam.d config files. I am running a virtual private server where I need to use the kernel provided by the supplier. Is there a way to find out what options they used to configure the kernel? I want to verify whether the above is the cause. If it turns out not to be the cause, what are the risks of disabling pam_loginuid for crond and sshd?

    Read the article

  • Running a webserver behind a firewall I have no access to

    - by reijin
    I'm having a bad time in my student apartment: I want to run a webserver on my laptop that is reachable from outside the network. I'm sitting behind a proxy server that passes outgoing packets to the matching server, but when it comes to incoming messages, it doesn't route them correctly to my PC. (It seems packets only get passed if some PC within the student flat is already connected to the sending server.) In the past I had a small virtual private server that forwarded incoming website requests over a reverse shell to my PC, which then returned the website content so the visitor could see my website. Sadly I don't have that server anymore... Do you have any idea what might solve my problem? Greetings, Benedikt

    Read the article

  • Confused about javascript module pattern implementation

    - by Damon
    I have a class written on a project I'm working on that I've been told uses the module pattern, but it does things a little differently from the examples I've seen. It basically takes this form:

        (function ($, document, window, undefined) {
            var module = {
                foo : bar,
                aMethod : function (arg) {
                    className.bMethod(arg);
                },
                bMethod : function (arg) {
                    console.log('spoons');
                }
            };
            window.ajaxTable = ajaxTable;
        })(jQuery, document, window);

    I get what's going on here, but I'm not sure how this relates to most of the definitions I've seen of the module (or revealing module?) pattern, like this one from briancray:

        var module = (function () {
            // private variables and functions
            var foo = 'bar';

            // constructor
            var module = function () {
            };

            // prototype
            module.prototype = {
                constructor: module,
                something: function () {
                }
            };

            // return module
            return module;
        })();

        var my_module = new module();

    Is the first example basically like the second, except that everything is in the constructor? I'm just wrapping my head around patterns, and the little things at the beginnings and endings always leave me unsure of what I should be doing.

    Read the article

  • Thinkpad brightness steps error using FN+Home/End

    - by petermolnar
    I've run into the following problem: normally my T400 (Lenovo ThinkPad) has 16 steps of brightness, and Windows uses all of them correctly. After a fresh install (plus minor tweaks) of Mint 12 (which is based on Ubuntu 11.10), I had only 6 steps, which was far too few. Listing /sys/class/backlight showed 3 entries. I removed the acpi-tools package and one of them disappeared – and I now have 10 steps! Therefore I think that if I can reduce the entries to 1, I'm going to have 16 steps, since the stepping will be 1 instead of 2 (or 3).

        /sys/class/backlight/
        intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-LVDS-1/intel_backlight
        thinkpad_screen -> ../../devices/virtual/backlight/thinkpad_screen

    The problem is that I'm unable to trace back which configs / daemons / kernel options trigger these two. More strangely, I discovered some odd behaviour. I monitored

        watch -n1 "cat /sys/class/backlight/thinkpad_screen/actual_brightness"

    and

        watch -n1 "cat /sys/class/backlight/intel_backlight/actual_brightness"

    while changing the brightness with Fn+Home/End from max to min. The outcome is the following:

        brightness  intel    thinkpad
        ----------  -------  --------
        MAX         2408475  7
        |           1955115  5
        |           1435640  3
        |           1246740  1
        |           1086175  0
        |           1010615  6
        |           859495   4
        |           689485   2
        v           481695   0
        MIN         217235   0

        brightness  intel    thinkpad
        ----------  -------  --------
        MIN         217235   0
        |           481695   2
        |           689485   4
        |           859495   6
        |           1010615  7
        |           1086175  1
        |           1246740  3
        |           1435640  5
        v           1955115  7
        MAX         2408475  0

    When stepping from MIN to MAX, there's no difference between the last two steps. Also, the OSD icon (Cinnamon desktop, default theme) goes from full to min in 4 steps, and then from full to min once again in 4 steps. So... it seems the intel entry is working correctly and showing correct values, while the thinkpad entry twists things around and even shows incorrect values. Does anyone have any idea how to get rid of the thinkpad entry? System data: Linux Mint 12, 3.0.0-16 kernel, Lenovo ThinkPad T400, Cinnamon 1.4 desktop. For any additional info, please tell me what you need. EDIT: I'm sorry, I forgot to mention that I also added acpi_backlight=vendor to the GRUB cmdline; the semi-improved behaviour described above is with that option, compared to the default.

    Read the article

  • A website hosted on the 1.0.0.0/8 subnet, somewhere on the Internet?

    - by Dave Markle
    Background

    I'm attempting to demonstrate, using a real-world example, why someone would not want to configure their internal network on the 1.0.0.0/8 subnet. Obviously it's because this is not designated as private address space. As of 2010, ARIN has apparently allocated 1.0.0.0/8 to APNIC (the Asia-Pacific NIC), which seems to have begun assigning addresses in that subnet, though not in 1.1.0.0/16, 1.0.0.0/16, and some others (because those addresses are so polluted by bad network configurations all around the Internet).

    My Question

    My question is this: I'd like to find a website that responds somewhere on this subnet and use it as a counter-example, demonstrating to a non-technical user its inaccessibility from an internal network configured on 1.0.0.0/8. Other than writing a program to sniff all ~16 million hosts, looking for a response on port 80, does anyone know of a directory I can use, or, even better, does anyone know of a site that's configured on this subnet? WHOIS seems to be too general a search for me at this point...
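
    For what it's worth, the brute-force probe the poster mentions is short to write. A minimal sketch in Java, testing a single address for a response on port 80 with a small timeout (looping over the whole /8 is omitted, and the sample address is arbitrary):

        import java.net.InetSocketAddress;
        import java.net.Socket;

        public class PortProbe {
            public static void main(String[] args) throws Exception {
                String host = args.length > 0 ? args[0] : "1.2.3.4"; // arbitrary address in 1.0.0.0/8
                Socket socket = new Socket();
                try {
                    // Attempt a TCP connection to port 80, giving up after 500 ms.
                    socket.connect(new InetSocketAddress(host, 80), 500);
                    System.out.println(host + " answers on port 80");
                } catch (Exception e) {
                    System.out.println(host + " did not answer: " + e);
                } finally {
                    socket.close();
                }
            }
        }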

    Read the article

  • Hiding monitor from windows, working with it from my app only [closed]

    - by Mikhail
    I need to use a monitor as a "private" device for my special application. I want to use it as a flashlight of a sort and draw special patterns on it in full screen. I don't want this monitor to be recognized by the OS (Windows 7) as a monitor; i.e., the user should not be able to move the mouse onto it, change its resolution, run a screensaver on it, or whatever. But I do want to be able to interact with it from my application. The monitor is plugged into a video card (most probably NVIDIA) using an HDMI cable. What is the simplest way to do this? All solutions are appreciated, including purchasing additional adapters, simple video cards, or any other special devices.

    Read the article

  • Oracle Enterprise Manager 12c R3 introduces advancements in cloud lifecycle and operations management

    - by Anand Akela
    Oracle Enterprise Manager 12c Release 3 (R3) was announced (press release) earlier today. It is now available for download on OTN. This latest release features improvements in several areas, including:

        - Improvements to Private Cloud and Engineered Systems Management
        - Expanded Middleware and Application Management Capabilities
        - Efficiency Gains for Enterprise Manager Users in EM's Enterprise-Ready Framework

    You can learn more about what's new in Oracle Enterprise Manager 12c R3 in the Enterprise Manager 12c documentation. You will see more blogs and details about the new features during the next few weeks; please let us know what you think. On July 18th, you can join us at a webcast to hear Thomas Kurian, EVP of Product Development, on what Oracle Engineering has achieved with Oracle Enterprise Manager 12c Release 3 to address these challenges. Later during this webcast, Oracle experts will discuss the latest capabilities in Oracle Enterprise Manager 12c Release 3 for cloud lifecycle and operations management. The presentation will be followed by a live Q&A session with Oracle experts. You can also join us online on Twitter to get your specific questions answered; please use hash tag #em12c to join the conversation. Register now for the webcast! Stay connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Take care to unhook Anonymous Delegates

    - by David Vallens
    Anonymous delegates are great; they eliminate the need for lots of small classes that just pass values around. However, care needs to be taken when using them, as they are not automatically unhooked when the function that created them returns. In fact, after it returns there is no way to unhook them. Consider the following code.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    SimpleEventSource t = new SimpleEventSource();
                    t.FireEvent();
                    FunctionWithAnonymousDelegate(t);
                    t.FireEvent();
                }

                private static void FunctionWithAnonymousDelegate(SimpleEventSource t)
                {
                    t.MyEvent += delegate(object sender, EventArgs args)
                    {
                        Debug.WriteLine("Anonymous delegate called");
                    };
                    t.FireEvent();
                }
            }

            public class SimpleEventSource
            {
                public event EventHandler MyEvent;

                public void FireEvent()
                {
                    if (MyEvent == null)
                    {
                        Debug.WriteLine("Attempting to fire event - but no ones listening");
                    }
                    else
                    {
                        Debug.WriteLine("Firing event");
                        MyEvent(this, EventArgs.Empty);
                    }
                }
            }
        }

    If you expected the anonymous delegate to die with the function that created it, then you would expect the output:

        Attempting to fire event - but no ones listening
        Firing event
        Anonymous delegate called
        Attempting to fire event - but no ones listening

    However, what you actually get is:

        Attempting to fire event - but no ones listening
        Firing event
        Anonymous delegate called
        Firing event
        Anonymous delegate called

    In my example the issue just slows things down, but if your delegate modifies objects, you could end up with difficult-to-diagnose bugs. A solution to this problem is to unhook the delegate within the function (note that the handler must be stored in a delegate-typed variable rather than var, so it can be removed later):

        EventHandler myDelegate = delegate(object sender, EventArgs args)
        {
            Console.WriteLine("I did it!");
        };
        MyEvent += myDelegate;
        // .... later
        MyEvent -= myDelegate;

    Read the article
