Search Results

Search found 915 results on 37 pages for 'restrictions'.

Page 19/37 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • I am looking to create realistic car movement using vectors

    - by bobthemac
    I have googled how to do this and found this http://www.helixsoft.nl/articles/circle/sincos.htm I have had a go at it but most of the functions that were shown didn't work; I just got errors because they didn't exist. I have looked at the cos and sin functions but don't understand how to use them or how to get the car movement working correctly using vectors. I have no code because I am not sure what to do, sorry. Any help is appreciated. EDIT: I have restrictions that I must use the TL engine for my game, and I am not allowed to add any sort of physics engine. It must be programmed in C++. Here is a sample of what I got from trying to follow what was done in the link I provided. if(myEngine->KeyHeld(Key_W)) { length += carSpeedIncrement; } if(myEngine->KeyHeld(Key_S)) { length -= carSpeedIncrement; } if(myEngine->KeyHeld(Key_A)) { angle -= carSpeedIncrement; } if(myEngine->KeyHeld(Key_D)) { angle += carSpeedIncrement; } carVelocityX = length * cos(angle); /* scale by the accumulated speed, and use sin for the Z component, otherwise the car can only move along one diagonal */ carVelocityZ = length * sin(angle); car->MoveX(carVelocityX * frameTime); car->MoveZ(carVelocityZ * frameTime);

    Read the article

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background The "all-PK-must-be-surrogates" approach is not present in Codd's relational model or any SQL standard (ANSI, ISO or other). Canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same thing the canonical books do: Use surrogate keys as primary keys when: There are no natural or business keys; Natural or business keys are bad (change often); The value of the natural or business key is not known at the time of inserting the record; Multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose. Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations from some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changed, so we will have to use the RDBMS cascade-update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and having to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks. Obviously, long, varying varchars are not good for PKs. But indexes based on short, fixed-length varchars are almost as fast as integers. The questions: - Is there any canonical source that supports the "all-PK-must-be-surrogates" approach? - Has Codd's relational model been superseded by a newer relational model?

    Read the article

  • Providing SSH tunneling: what to think about when configuring Ubuntu Server

    - by bigbadonk420
    Recently I've considered, mostly as a pet project, setting up accounts for a closed group of users via SSH to my box for the purpose of SSH tunneling things like web traffic -- some of it for friends who live abroad and perhaps also to help some people bypass national censorship. There are some things I imagine I need to do, such as: Disabling shell access by setting the shell to /bin/false or similar. Get some software that can track bandwidth usage on a per-user basis historically. Make sure that each user can only use a certain amount of bandwidth. The reason I'm posting here to begin with is to look around and get some pointers regarding what kind of things I should read up on, as well as hearing whether there are any software recommendations for doing what I'm trying to do. I already know a bit since I've actually gotten SSH tunneling up and running already; I just don't feel like letting it loose to other people without restrictions and some basic monitoring. I'm primarily trying to learn here, so if you think this is a Very Bad Idea (or if you have a better idea on how to do this) then by all means say so, but please include some information on how to do it :) (I'm also open to trying things like OpenVPN, but it seems really hard to set up; also I've heard SSH more often works in locked-down environments)

    Read the article

  • Suggestions for a Self-serv advertising service

    - by Mystere Man
    I am seeking a self-serv advertising service for my websites, but I have a few restrictions that seem to make what I'm looking for hard to find. Specifically, I want to place "advertise here" links on my pages and allow end-users to purchase advertising on that site, page, and location. These ads will not be part of a national network. Supports multi-tenancy - That is, I have a number of domains using the same "web application" but with customized content per domain. When a customer wants to advertise on a given domain, then the ads will only appear on that domain and on that page of the domain (even though the page name may be the same across multiple domains). Supports fixed ad prices, not just CPC. I need monthly and quarterly pricing regardless of performance. Integrates with OpenX and other ad networks, so that if there is no self-serv ad in a given zone, it will use national advertising or direct advertising. Shiny Ads has much of this, but I'm looking for alternatives, as their prices are a bit crazy (20%) and they can only do PayPal.

    Read the article

  • Make the most of yourself and your team during Ramadan

    - by swalker
    To remain competitive, you and your staff need training. The forthcoming Ramadan period adds even more restrictions to those like project deadlines, travel approval and increasing workloads. With Training On Demand these interferences that prevent the attendance of training courses disappear: you and your staff can learn any place, any time. You can find more information here. For assistance with bookings contact your local Oracle University Service Desk. Stay Connected to Oracle University: LinkedIn OracleMix Twitter Facebook Google+

    Read the article

  • All in a Day's Work: Unblocking Multiple Downloaded Files with a Single Command

    - by Sam Abraham
    Files downloaded using Internet Explorer retain the Internet Zone permission level and hence are "Blocked" by default on Windows 7 machines. Honestly, while it is an added overhead for developers, I really appreciate this feature as it provides a good protection layer for casual web users. My workaround is to simply unblock the downloaded zip file (if the download was a zip file), which, in turn, unblocks the files stored within. Today however, I was left with a situation where I had to "Open" and "Copy" the content rather than "Save" a zip file. That of course left me with a few dozen files I had to manually unblock. A few minutes of internet searching led me to the link below, which worked like a charm: 1-Download streams.exe from Sysinternals - http://technet.microsoft.com/en-us/sysinternals/bb897440.aspx 2-Go to the command prompt (cmd.exe) 3-Navigate to where you have streams.exe installed 4-Use the command line switches: streams.exe -s -d "<folder path>" This removed the Internet Zone restrictions from all files under "<folder path>" and its subfolders as well. [Deleted :Zone.Identifier:$DATA] References: http://social.technet.microsoft.com/Forums/en-US/itproxpsp/thread/806f0104-1caa-4a66-b504-7a681d1ccb33/

    Read the article

  • WebCenter 11.1.1.8 Certified with E-Business Suite 12.2

    - by Steven Chan (Oracle Development)
    Oracle WebCenter Suite is an integrated suite of Fusion Middleware 11gR1 tools used to create web sites and portals using service-oriented architecture (SOA). Application adapters are also available. WebCenter Portal 11.1.1.8 is now certified with Oracle E-Business Suite Release 12.2. This complements our existing certifications of WebCenter Portal 11.1.1.8 with EBS 12.0 and 12.1. WebCenter Portal 11.1.1.8 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.8.0, also known as FMW 11gR1 Patchset 7. Certified Platforms Oracle WebCenter Portal is certified to run on any operating system for which Oracle WebLogic Server 11g is certified. For information on operating systems supported by Oracle WebLogic Server 11g and Oracle WebCenter Portal, refer to the 'Oracle Fusion Middleware on WebLogic Server - System Certification' in the Oracle Fusion Middleware 11g Release 1 (11.1.1.x) Certification Matrix. Integration with Oracle WebCenter Portal involves components spanning several different suites of Oracle products. There are no restrictions on the platform on which any particular component may be installed, so long as the platform is supported for that component. Migrating to Oracle WebCenter If you're currently using Oracle Portal, you should be aware that Portal is now in maintenance mode. Updates with bug fixes will continue to be produced, but you should consider migrating to Oracle WebCenter for ongoing new features. References Using WebCenter 11.1.1 with Oracle E-Business Suite Release 12.2 (Note 1332645.1) WebCenter Portal 11g Release 1 (11.1.1.8) Documentation Related Articles Oracle E-Business Suite 12.2 Now Available WebCenter Portal 11.1.1.8 Certified with E-Business Suite 12

    Read the article

  • Weird behavior when using pointers [migrated]

    - by Kinan Al Sarmini
    When I run this code on MS VS C++ 2010: #include <iostream> int main() { const int a = 10; const int *b = &a; int *c = (int *)b; *c = 10000; std::cout << c << " " << &a << std::endl; std::cout << *c << " " << a << " " << *(&a) << std::endl; return 0; } The output is: 0037F784 0037F784 10000 10 10 The motivation for writing that code was this sentence from "The C++ Programming Language" by Stroustrup: "It is possible to explicitly remove the restrictions on a pointer to const by explicit type conversion". I know that trying to modify a constant is conceptually wrong, but I find this result quite weird. Can anyone explain the reason behind it?

    Read the article

  • Only one user can connect to Ubuntu samba server

    - by StaticMethod
    I set up a Samba server on 12.04 LTS, and it works great for one user but not the others. I am trying to map a network drive from a Windows 7 laptop. I can successfully authenticate with one user, but the other two both get "Access is denied" errors. Here is my smb.conf file. [global] server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d idmap config * : backend = tdb [printers] comment = All Printers path = /var/spool/samba create mask = 0700 printable = Yes print ok = Yes browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/printers [share] comment = Ubuntu File Server Share path = /srv/share read only = No create mask = 0755 I know that the service is successfully reading from the /etc/passwd file, because if I change the Linux password for the user that works, I have to use the new password when I connect. I changed all the users so they are all members of the same groups (all three users are admins anyway). I only ever have one user connected at a time. Here are the permissions on the shared folder: /srv$ ls -l drwxrwxrwx 1 nobody nogroup 16 Feb 22 17:05 share Any ideas?

    Read the article

  • Which is the best image hosting site for hosting images for a website? [closed]

    - by rahul dagli
    I currently have a website and blog and am using a limited web hosting plan. When I upload images to my hosting server, they consume a lot of bandwidth and space. So I was thinking of hosting images on some other image hosting site and direct-linking them to my site. I found a few sites like ImageShack, Photobucket, TinyPic, and Imgur. However, I see all have certain restrictions. The features I am looking for are as follows: 1. At least 10 GB space 2. At least 500 GB bandwidth (because I have very high traffic) 3. Very high speed even during heavy load, like 1000 visitors accessing every hour. 4. Ultra reliable servers (99.9% uptime) 5. Privacy control 6. Must not ever delete an image if inactive 7. Create and manage albums 8. Company that will last long in business, at least for the next 10 years. 9. Free of cost 10. Hotlinking/direct-linking images.

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server all the websites connect to as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs multiple databases. This question has been answered numerous times. The question is regarding the pros and cons for a deployment like this having the ability to manage all the websites centrally (one server) vs trying to keep them all in sync if they each have their own db (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at a central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32 (Mounted Drives) ???

    - by Microkernel
    Hi guys, I have some strange problems with a Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines, one on VirtualBox on Ubuntu and the other an office laptop. The Windows machine on VBox has no issues in accessing the shared folders, but the laptop is not able to access all the shared content. The issue faced on the laptop is: shared folders on ext3 drives can be accessed without problems, but the contents shared on NTFS and FAT32 drives (mounted ones) are not accessible. When I try to open the shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details!!!). I changed the workgroup value to the domain_name on the office laptop, but still the problem persists... Here is the smb.conf I am using... [global] workgroup = XXX.XXX.ORG server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d guest ok = Yes [homes] comment = Home Directories [printers] comment = All Printers path = /var/spool/samba read only = No create mask = 0700 printable = Yes browseable = No [print$] comment = Samba server's CD-ROM path = /cdrom force user = nobody force group = nobody locking = No The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail. Thanks in advance Regards, Microkernel

    Read the article

  • How to implement behavior in a component-based game architecture?

    - by ghostonline
    I am starting to implement player and enemy AI in a game, but I am confused about how to best implement this in a component-based game architecture. Say I have a player character that can be stationary, running, and swinging a sword. A player can transition to the swing-sword state from both the stationary and running states, but then the swing must be completed before the player can resume standing or running around. During the swing, the player cannot walk around. As I see it, I have two implementation approaches: Create a single AI component containing all player logic (either decoupled from the actual component or embedded as a PlayerAIComponent). I can easily see how to enforce the state restrictions without creating coupling between the individual components making up the player entity. However, the AI component cannot be broken up. If I have, for example, an enemy that can only stand and walk around, or one that only walks around and occasionally swings a sword, I have to create new AI components. Break the behavior up into components, each identifying a specific state. I then get a StandComponent, WalkComponent and SwingComponent. To enforce the transition rules, I have to couple the components. SwingComponent must disable StandComponent and WalkComponent for the duration of the swing. When I have an enemy that only stands around, swinging a sword occasionally, I have to make sure SwingComponent only disables WalkComponent if it is present. Although this allows for better mixing and matching of components, it can lead to a maintainability nightmare, as each time a dependency is added, the existing components must be updated to play nicely with the new requirements the dependency places on the character. The ideal situation would be that a designer can build new enemies/players by dragging components into a container, without having to touch a single line of engine or script code. Although I am not sure script coding can be avoided, I want to keep it as simple as possible. Summing it all up: Should I lump all AI logic into one component, or break each logic state into separate components to create entity variants more easily?
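    A minimal sketch of the second approach, purely for illustration: state components that enforce the transition rules by enabling and disabling one another, including the "only disable it if it is present" check described above. The Entity/Component classes and names are hypothetical and not taken from any particular engine (shown here in Java; the idea itself is language-agnostic).

        // Hypothetical component-based state handling; all class names are made up.
        import java.util.ArrayList;
        import java.util.List;

        abstract class Component {
            protected Entity owner;
            private boolean enabled = true;
            void attach(Entity owner) { this.owner = owner; }
            void setEnabled(boolean enabled) { this.enabled = enabled; }
            boolean isEnabled() { return enabled; }
            abstract void update(float dt);
        }

        class Entity {
            private final List<Component> components = new ArrayList<>();
            void add(Component c) { c.attach(this); components.add(c); }
            // Returns the first component of the given type, or null if absent,
            // so components only have soft, optional dependencies on each other.
            <T extends Component> T get(Class<T> type) {
                for (Component c : components)
                    if (type.isInstance(c)) return type.cast(c);
                return null;
            }
            void update(float dt) {
                for (Component c : components)
                    if (c.isEnabled()) c.update(dt);
            }
        }

        class WalkComponent extends Component {
            @Override void update(float dt) { /* move the entity around */ }
        }

        class SwingComponent extends Component {
            private float timeLeft;
            void startSwing(float duration) {
                timeLeft = duration;
                WalkComponent walk = owner.get(WalkComponent.class);
                if (walk != null) walk.setEnabled(false);   // only if present
            }
            @Override void update(float dt) {
                if (timeLeft <= 0) return;
                timeLeft -= dt;
                if (timeLeft <= 0) {                        // swing finished
                    WalkComponent walk = owner.get(WalkComponent.class);
                    if (walk != null) walk.setEnabled(true);
                }
            }
        }

    Because SwingComponent looks its sibling up by type and tolerates its absence, the same component works unchanged on an enemy that only stands and swings, which is the mix-and-match case the question raises.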

    Read the article

  • Why does Android make good coding so difficult?

    - by metacircle
    My daily work is writing tools in C#/WPF. After more than a year on the job now, I have come to love MVVM, IoC containers, XAML (and more). It's pure fun to write code, since simple, maintainable and extendable code just comes naturally when you follow a few basic patterns. In my free time I really want to write some apps, mainly for my own personal use. I want to write apps for fun and not to make money or anything; that being said, paying an annual fee to be allowed to use my own apps on my own device is a total no-go for me. So I am not able to code for Windows Phone and am also not able to use Xamarin on Android (which is sad, since Visual Studio + ReSharper is programmers' heaven). So I am stuck with Android "classic" Java development. Every time I sit down at home to create an app, or improve some of the code I have already written, I get annoyed very quickly because getting good, decoupled code is just so hard to accomplish. It feels like everything you have to do in Android to create a good architecture is a workaround, instead of being the way things are meant to be. Writing the UI in XML is fine, but everything else is one big code mess. Even all the tutorials do all their coding in the code-behind. For 'hello world' this is fine, but for anything bigger this gets messy very, very quickly. This is where the fun ends for me. It's just no fun anymore because I spend 90% of my time refactoring and thinking of workarounds to make my code more maintainable with all the restrictions Android puts on me. Am I missing a crucial part, or is this just the way Android is meant to be? Do you have any suggestions on how to learn 'the fun way' of Android programming?

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.
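    For the first option (querying the NamedCache directly), a minimal sketch of a Coherence filter query is shown below. The cache name "orders" and the getStatus() accessor on the cached value class are assumptions made purely for illustration.

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.Filter;
        import com.tangosol.util.filter.EqualsFilter;

        import java.util.Map;
        import java.util.Set;

        public class CacheQueryExample {
            @SuppressWarnings("unchecked")
            public static void main(String[] args) {
                // Query the logical entity set through the cache itself; only the
                // intrinsic limits of the Coherence query engine apply here.
                NamedCache cache = CacheFactory.getCache("orders");

                // Select entries whose getStatus() returns "OPEN". The accessor
                // name is an assumption about the cached value class.
                Filter openOrders = new EqualsFilter("getStatus", "OPEN");
                Set<Map.Entry> entries = cache.entrySet(openOrders);

                for (Map.Entry entry : entries) {
                    System.out.println(entry.getKey() + " -> " + entry.getValue());
                }

                CacheFactory.shutdown();
            }
        }

    A query issued against the database instead would be ordinary SQL and is the case subject to the partially committed data and version-skew caveats described above.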

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked with writing an application that will be a combination of document and inventory management in VB.NET, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR so they are searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside). My concern is finding a database solution that will not become unstable due to size restrictions, record limitations, or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage, pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is the interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.
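    Purely as an illustration of the single-table design sketched above (the asker's stack is VB.NET/ADO.NET, so this is not a recommendation of Java): a rough JDBC sketch that creates the table and stores one document as a BLOB, assuming SQLite with the Xerial sqlite-jdbc driver on the classpath. The file name, date and column types are made up for the example.

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.Statement;

        public class DocumentStoreSketch {
            public static void main(String[] args) throws Exception {
                // Single-file database, matching the "all data in one place" goal.
                try (Connection conn = DriverManager.getConnection("jdbc:sqlite:documents.db")) {
                    try (Statement st = conn.createStatement()) {
                        st.execute("CREATE TABLE IF NOT EXISTS DOCS (" +
                                   "DOC_NAME TEXT, DOC_DATE TEXT, NOTES TEXT, DOC_BINARY BLOB)");
                    }
                    Path file = Path.of("invoice-0001.pdf");   // hypothetical document
                    byte[] data = Files.readAllBytes(file);    // the binary payload
                    try (PreparedStatement ps = conn.prepareStatement(
                            "INSERT INTO DOCS (DOC_NAME, DOC_DATE, NOTES, DOC_BINARY) VALUES (?, ?, ?, ?)")) {
                        ps.setString(1, file.getFileName().toString());
                        ps.setString(2, "2012-06-01");
                        ps.setString(3, "Scanned invoice");
                        ps.setBytes(4, data);                  // store the document as a BLOB
                        ps.executeUpdate();
                    }
                }
            }
        }

    The same table layout carries over to MS SQL or MySQL; only the connection string and the BLOB column type (VARBINARY(MAX), LONGBLOB) change.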

    Read the article

  • Guidelines for creating referentially transparent callables

    - by max
    In some cases, I want to use referentially transparent callables while coding in Python. My goals are to help with handling concurrency, memoization, unit testing, and verification of code correctness. I want to write down clear rules for myself and other developers to follow that would ensure referential transparency. I don't mind that Python won't enforce any rules - we trust ourselves to follow them. Note that we never modify functions or methods in place (i.e., by hacking into the bytecode). Would the following make sense? A callable object c of class C will be referentially transparent if: Whenever the returned value of c(...) depends on any instance attributes, global variables, or disk files, such attributes, variables, and files must not change for the duration of the program execution; the only exception is that instance attributes may be changed during instance initialization. When c(...) is executed, no modifications to the program state occur that may affect the behavior of any object accessed through its "public interface" (as defined by us). If we don't put any restrictions on what "public interface" includes, then rule #2 becomes: When c(...) is executed, no objects are modified that are visible outside the scope of c.__call__. Note: I unsuccessfully tried to ask this question on SO, but I'm hoping it's more appropriate to this site.

    Read the article

  • Installing a new ASP.NET 4.0 site on a Windows 2008 server.

    - by TATWORTH
    I have been specifically requested to blog about getting an ASP.NET 4.0 site working on a Windows 2008 server that has never run a 4.0 web site before. Make sure the 4.0 framework is installed on the server! Patch it until ALL the security patches have been applied. (For a live server, make sure that you have tested the patches on your development server first.) You will find the HTTP log status codes at http://support.microsoft.com/kb/943891 - they are very important in understanding the IIS logs. After installing, turn on 4.0 by doing the following: Start Internet Information Services (IIS) Manager. Select the server node in the connections pane (this is the node above Application Pools, FTP Sites and Server Farms). Double click the ISAPI and CGI Restrictions item in the centre pane. You should see 1 or 2 ASP.NET v4.0.30319 entries; select Enable in the Actions pane for all of them. ASP.NET 4.0 should now run! Remember, after creating your new ASP.NET 4.0 site, select the Sites node and find out its Id. By default, the IIS logs are at C:\inetpub\logs\LogFiles and if your site's Id is, say, 21, then the logs will be created in the W3SVC21 sub-directory. The key point about using these logs is that in the event of an error when trying to start the site for the first time, the log will contain the status code and the sub-code. By having the full code and sub-code, set-up issues can be resolved in minutes instead of hours.

    Read the article

  • What online games would let me practice AI development?

    - by Myn
    I am working on a project experimenting with Artificial Intelligence design methodologies for online world avatars. Online world here is quite open to interpretation; Second Life is just as applicable as Counter-Strike, for example. To carry out these experiments, I must first develop an intelligent agent for the world in question. However, I am honestly quite stuck as to which game I could use for this. My preference was to develop an intelligent "bot" to play an MMORPG, but the legal restrictions of such games prevent me. Likewise, with most FPS games the use of an intelligent agent in place of a human player is considered cheating. The alternative, of course, is to create an NPC bot; an intelligent agent that populates the world alongside the player(s) rather than replacing a particular player. However, I'm struggling to find a game that would enable me to create an intelligent opponent either. I suppose the main requirement would be a game that allows a third-party program to use the function calls usually utilised by players and to read feedback on the state of the world. Quake III and Unreal Tournament have been suggested before, but they have already been the subject of work on this research project. Short of writing my own online game from scratch, what games would allow me, through middleware, an API, or otherwise, to create either an artificially intelligent player or a bot?

    Read the article

  • "Unmet Dependencies" problem when trying apt-get install

    - by GChorn
    Anytime I try to install Python packages using the command: sudo apt-get install python-package I get the following output: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-headers-generic : Depends: linux-headers-3.2.0-36-generic but it is not going to be installed linux-headers-generic-pae : Depends: linux-headers-3.2.0-36-generic-pae but it is not going to be installed linux-image-generic : Depends: linux-image-3.2.0-36-generic but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). This seems to have started when these same three packages showed up in Ubuntu's Update Manager and produced an error when I tried to install them there. Based on the suggestion in the output above, I tried running: sudo apt-get -f install But this only gave me several instances of the following error: dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-36-generic_3.2.0-36.57_i386.deb (--unpack): unable to create `/lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko.dpkg-new' (while processing `./lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko'): No space left on device Now maybe I'm way off-base here, but I'm wondering if the error could be coming from the "No space left on device" part? The thing is, I'm running Ubuntu as a VirtualBox VM, but I've got it set to dynamically increase its virtual hard drive space as needed, so why am I still getting this error? Here's my output when I use df -h: Filesystem Size Used Avail Use% Mounted on /dev/sda1 6.9G 5.7G 869M 88% / udev 494M 4.0K 494M 1% /dev tmpfs 201M 784K 200M 1% /run none 5.0M 0 5.0M 0% /run/lock none 501M 76K 501M 1% /run/shm VB_Shared_Folder 466G 271G 195G 59% /media/sf_VB_Shared_Folder When I run sudo apt-get -f install and the system says "After this operation, 192 MB of additional disk space will be used", does that mean 192 MB of my virtual machine's current free space, or 192 MB on top of the rest of my free space? As I said, my machine normally dynamically allocates additional disk space from the host machine, so I don't see why there would be space restrictions at all...

    Read the article

  • Thunderbird: "Could not initialize the application's security component" [closed]

    - by user unknown
    In Thunderbird, on startup, I get the error message: "Could not initialize the application's security component". The message goes on to suggest checking the permissions of the profile directory and the free disk space. df -h shows that I have 19 GB of free disk space. find . -not -perm -644 -not -perm -600 -ls shows no files without read/write permissions for me. Before the error occurred, Thunderbird worked well. But I changed my main mail account. I had two, let's call them A and B, and used mainly A, but now I wanted to deactivate it, and receive and send automatically via the second. I had problems moving the filters from inbox A to inbox B (missing copy functionality). On the web, I found (MozillaZine) hints to move key3.db, cert8.db and secmod.db out of the way, but it didn't work for me. Another hint was to uninstall Quickcam (?, sic!), but I don't have Quickcam. A third was to recreate the profile, but I have subdirectories, filters, address book, groups - mails back to the year 2003. I don't want to risk the loss of data. The whole error message is: Could not initialize the application's security component. The most likely cause is problems with files in your application's profile directory. Please check that this directory has no read/write restrictions and your hard disk is not full or close to full. It is recommended that you exit the application and fix the problem. If you continue to use this session, you might see incorrect application behaviour when accessing security features. When I open the error console, it is empty.

    Read the article

  • Game with changing logic

    - by rsprat
    I'm planning to develop a puzzle-like mobile game (Android/iOS) with a different logic for each puzzle. Say, for example, one puzzle could be a Rubik's cube and another one a ball maze. Many more new puzzles will appear during the life of the game, and I want the users to be able to play those new puzzles. The standard way of managing this would be through application updates. Each time a new puzzle or bunch of puzzles appears, create a new update for the app that the user can download. However, I would like to do it in a more transparent way. When a new puzzle appears, the basic info of the puzzle would be displayed in the app menu, and the user would be able to play it by just clicking it. What comes to my mind is that the app would automatically download a .dll or .jar and inject it into the application at runtime. Is that even possible? Are there any restrictions from the OS? Is there a better way of solving it? Thanks a lot
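    On the JVM side, loading downloaded code at runtime is possible in principle. Below is a rough plain-Java sketch using URLClassLoader, with a hypothetical Puzzle interface, jar path and class name; it ignores platform specifics (Android would normally use its own DexClassLoader with dex-compiled code, and iOS App Store rules generally prohibit downloading executable code, so a data-driven or interpreted puzzle description may be the safer route there).

        import java.net.URL;
        import java.net.URLClassLoader;
        import java.nio.file.Path;

        // Hypothetical plug-in contract that every downloaded puzzle implements.
        interface Puzzle {
            String title();
            void start();
        }

        public class PuzzleLoader {
            // Load a Puzzle implementation from a previously downloaded jar.
            // Both the jar path and the class name would come from the server's
            // puzzle metadata; the names here are made up for illustration.
            static Puzzle load(Path jarFile, String className) throws Exception {
                URLClassLoader loader = new URLClassLoader(
                        new URL[] { jarFile.toUri().toURL() },
                        PuzzleLoader.class.getClassLoader());
                Class<?> cls = Class.forName(className, true, loader);
                return (Puzzle) cls.getDeclaredConstructor().newInstance();
            }

            public static void main(String[] args) throws Exception {
                Puzzle p = load(Path.of("downloads/rubiks-cube.jar"),
                                "com.example.puzzles.RubiksCube");
                System.out.println("Loaded puzzle: " + p.title());
                p.start();
            }
        }

    Keeping every puzzle behind one small interface like this is what makes the "download and plug in" model workable, whichever loading mechanism the platform allows.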

    Read the article

  • ASP.Net MVC 3: multiple versions of the site without changing the URL, is it possible?

    - by Seacat
    Our website is written in ASP.NET MVC 3 and we want to change a feature in the core functionality of the site. The problem is that not every client can be moved to this new version/format (because of some internal technical restrictions), which means that we need to keep two versions of the same functionality (backend and frontend) simultaneously. We don't want our clients to worry about URLs, so the ideal solution would be to keep the same URL but redirect clients to the different versions. The information about clients is stored in the database. So the moment a user (client) logs in, we know which version of the site we should show. I'm thinking about routing and areas, but I'm not sure if it's possible, for example, to have two areas with the different versions of the same application. Or is it possible to load the assemblies on the fly? After a user is logged in, we can decide whether (s)he should be redirected to the new or the old version. Once all the clients have been moved to the new version, we won't need this system any more. How can I do this in ASP.NET MVC?

    Read the article

  • C# calendar needs Business Logic for real-time reminders? [on hold]

    - by lazfish
    I am not a super experienced C# user, though I have some experience in .NET and VB.NET. I just got this new job and my first assignment is a mission-critical part of the business. It is pretty important I get it right, so I was hoping for some sage advice. We have created a calendar using jQuery, C#, .NET & SQL Server 08. The calendar works, but now we want to add email, SMS and voice-call reminders. We are a small company and can do whatever we need to do, with no restrictions on our IT environment. I have some baseline experience with Unix servers but would prefer to stay in the Micro$oft universe if it is prudent to do so. I know how to add the API calls and services to initiate these reminders (using built-in email services for email and Twilio.com for SMS). I am asking for advice about how to approach a reliable and timely listener service that knows when to call the service or API for the reminder before an appointment. E.g. SMS: "You have a conference call in 30 minutes." I have done some research, but it is hard to know what is a proven, reliable approach. What I am looking for is an experienced opinion on a good (or the best) strategy for implementing a solution that will reliably listen for appointments and dispatch the reminders when needed.
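    One common shape for this is a small polling scheduler: wake up periodically, query the appointments whose reminder time has arrived, dispatch them, and mark them sent so nothing is sent twice. The asker's stack is C#/.NET (where a Windows service with a timer, or a job scheduler such as Quartz.NET, plays the same role); the sketch below uses Java's ScheduledExecutorService purely to illustrate the pattern, and findDueReminders/sendSms/markAsSent are hypothetical placeholders, not a real API.

        import java.time.Instant;
        import java.util.List;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class ReminderDispatcher {
            // Hypothetical record of a reminder that is due to be sent.
            record Reminder(String phoneNumber, String message, Instant dueAt) {}

            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();

            public void start() {
                // Poll once a minute; anything due since the last poll gets dispatched.
                scheduler.scheduleAtFixedRate(this::dispatchDueReminders, 0, 1, TimeUnit.MINUTES);
            }

            private void dispatchDueReminders() {
                for (Reminder r : findDueReminders(Instant.now())) {
                    sendSms(r.phoneNumber(), r.message());
                    markAsSent(r);   // so a reminder is never sent twice
                }
            }

            // Placeholders: in the real system these would query the calendar tables
            // and call the SMS/email/voice provider (e.g. Twilio's REST API).
            private List<Reminder> findDueReminders(Instant now) { return List.of(); }
            private void sendSms(String phoneNumber, String message) { }
            private void markAsSent(Reminder r) { }
        }

    Marking reminders as sent inside the same poll (ideally in the same database transaction) is the part that makes the approach reliable across restarts of the service.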

    Read the article

  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
    Control Center Agent (CCA) The Control Center Agent is the OWB component that runs the Template Mappings in the Oracle Containers for J2EE (OC4J) server; also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts. The Control Center Agent provides fundamental infrastructure for the heterogeneous, Code Template-based mapping support and Web services-related features of OWB in this release. In Oracle Warehouse Builder 11gR2 the Control Center Agent, by default will run in the built-in OC4J that is bundled in the Oracle Home. Besides that, you also have ability to install the Control Center Agent in an Oracle Application Server install. In this article, you will find step-by-step instructions how to install the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks: Task 1: Install and Configure the Application Server Task 2: Deploy the Control Center Agent to the Application Server Task 3: Optional Configuration Tasks   Task 1: Install and Configure the Application Server Before configuring the Application Server, you need to install it from Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed and runs in the Application Server successfully. The essential configuration tasks are outlined below: · Modify the OC4J Startup Script · Set up Control Center Agent Server Side Logging · Set up Audit Table Data Source · Copy ct_permissions.properties File · Set up Security Roles for Control Center Agent · Create JMS Queues · Install JDBC Drivers to OC4J Modify the OC4J Startup Script The OC4J startup script “opmn.xml” is located in Application Server configuration directory, $AS_HOME/opmn/conf. $AS_HOME stands for the root home directory of the application server. Open the file opmn.xml in a text editor, and alter the contents of the file as displayed in the following sample. You need to make sure that: The MaxPerSize is set to 128M. This is to ensure that you allocate enough PermGen space to OC4J to run Control Center Agent. This will prevent java.lang.OutOfMemoryError when running the agent. The Python.path sets the path for the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where owb is installed. · The km_security_needed determines whether restrictions will be applied to the kinds of operating system commands allowed to be executed by the OWB Code Template script executed by Control Center Agent. Setting km_security_needed to “true” enforces such restriction while setting it to “false” removes such restrictions. Set up Control Center Agent Server Side Logging Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add the following lines to the log handler section. The jrt-internal-log-handler is the handler used by Control Center Agent runtime logger to create log files. Then add the following entry into the loggers section to create the logger for Control Center Agent runtime auditing. 
Set up Audit Table Data Source To enable Audit Table logging, a managed data source and connection pool need to be set up before Control Center Agent deployment. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor. Define the audit data source shown below in the file, <managed-data-source name="AuditDS" connection-pool-name="OWBSYS Audit   Connection Pool" jndi-name="jdbc/AuditDS"/> <connection-pool name="OWBSYS Audit Connection Pool">   <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"     user="owbsys_audit" password="owbsys_audit"     url="jdbc:oracle:thin:@//localhost:1521/ORCL"/> </connection-pool> Copy ct_permissions.properties File The ct_permissions.properties can be obtained from $OWB_HOME /owb/jrt/config/ directory. You need to copy the file to $AS_HOME/j2ee/home/config directory.This properties file takes effect when the setting km-security is set to true in Control Center Agent. By default the ALLOWED_CMD is commented out in ct_permissions.properties file. This prevents all system command from being invoked from scripts executed in Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, ALLOWED_CMD needs to be uncommented out, and the system commands (allowed to be invoked) need to be added to the ALLOWED_CMD. Set up Security Roles for Control Center Agent You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to Enterprise Manager Homepage (e.g. http://hostname:8889/em). 1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security. Click the task icon next to Security Providers. 2. On Security Providers page click on the button “Instance Level Security”. On Instance Level Security page, go to “Realms” tab. You will see a row for the default realm “jazn.com” in the results table. It has a “Roles” column and a “Users” column. Click on the number in “Roles” column. In the “Roles” page it will display all the roles available for the realm. Click on “Create” button to create a new role “OWB_J2EE_ EXECUTOR”. 3. On the Add Role screen, enter Name OWB_J2EE_EXECUTOR, and click OK. 4. Follow the same steps as before, and create a new role “OWB_J2EE_OPERATOR”. 5. Assign role “oc4j-administrators” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_OPERATOR” by moving these roles from “Available Roles” and click “OK” to save. 6. Go back to Instance Level Security page and create a new role “OWB_J2EE_ADMINISTRATOR”. 7. Assign roles “OWB_J2EE_ OPERATOR” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_ ADMINISTRATOR” by moving these roles from “Available Roles” and click “OK” to save. 8.Go back to Instance Level Security page. This time, click on the number in “Users” column for the realm “jazn.com”. In the “Users” page, it shows all the users defined for this realm. Locate the user “oc4jadmin” in the results table and click on it. 9. Assign the roles “OWB_J2EE_ADMINISTRATOR” and “oc4j-app-administrators” to this user by moving the role from the “Available Roles” selection box to “Selected Roles” box and click “Apply” to save. 10. Go back to Instance Level Security page and create a new role “OWB_INTERNAL_USERS”, assign no user or role to this role. 
Simply click “OK” to create this role. Now you have finished creating the security roles required for Control Center Agent. Create JMS Queues You need to create two JMS queues for Control Center Agent: owbQueue and abort_owbQueue. 1. Now go to OC4J home Page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations. 2. On JMS Destinations page, click “Create New” button to create a new JMS queue. On Add Destination page, choose “Queue” as Destination Type. Put “owbQueue” as Destination Name. Select “In Memory Persistence Only” as the Persistence Type and put “jms/owbQueue” as JNDI Location and click on “OK” to finish. 3. Follow the same instruction as above to create the owb_abortQueue. Now you have finished creating the JMS queues required for Control Center Agent. Install JDBC Drivers to OC4J In order to execute Code Templates using commercial databases other than Oracle, e.g. DB2, SQL Server etc, the corresponding jdbc driver files need to be added to $AS_HOME/j2ee/home/applib directory. 1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC drivers .jar files can be found in the directory: $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For sunopsis JDBC drivers, the file needed is snpsxmlo.jar. 2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib. Now you have finished the Application Server configuration. To make the configuration to take an effect, you need to restart the Application Server.   Task 2: Deploy the Control Center Agent to the Application Server Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to Enterprise Manager Homepage (e.g. http://hostname:8889/em). 1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab. Click Deploy to begin deploying Control Center Agent. 2. On the Deploy: Select Archive screen, under Archive, select Archive is present on local host. Upload the archive to the server where Application Server Control is running. Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select Automatically create a new deployment plan. Click Next. 3. Wait for the ear file to be uploaded to Application Server. On the Deploy: Application Attributes screen, enter Application Name jrt, and Context Root jrt. Leave the other attributes at their default values. Click Next. 4. On Deploy: Deployment Settings screen, leave all attributes at their default values, and click Deploy. This will take about 1 minute or so and when the application is deployed successfully, a confirmation message will be displayed. Now the Control Center Agent is started automatically. Go back to OC4J home page and click on Applications tab to make sure the deployed application jrt is showing in the applications list.   Task 3: Optional Configuration Tasks The optional configuration tasks contain: · Secure Control Center Agent Web Service · Setting the PATH Environment Variable Secure Control Center Agent Web Service If you want to use JRTWebService with a secure website, you need to do the following steps, 1. 
    Create a file "secure-web-site.xml" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. A sample secure-web-site.xml is shown below. We need to change the "protocol" to "https", set "secure" to "true", and choose a port as the secure HTTP port. We also need to add the entry "ssl-config" in the file. Remember to use the absolute path for the key store file. 2. Modify the file "server.xml" that is located in the $AS_HOME/j2ee/home/config directory. Then add the <web-site> element in the file for the secure-web-site. 3. Create a key store file "serverkeystore.jks" in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. After the three files are altered, restart the application server. Now you can access the JRTWebService over SSL through https://hostname:4443/jrt/webservice. Setting the PATH Environment Variable Sometimes, some system commands such as the Linux ls, sh etc. cannot be executed successfully during script execution because they are not found in the PATH. To ensure they work normally, you can set up the PATH environment variable. Let's navigate to the Enterprise Manager Homepage. 1. Go to the OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties. 2. On the Server Properties screen, scroll down to the Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH in Name, and fill Value with the directories that contain the system commands. Click Apply. After you work through this article, I believe you will have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge in other installation plans, such as a Control Center Agent installation on standalone OC4J.

    Read the article
