Search Results


  • How can I ensure my Collada model fits on an iPhone screen?

    - by rakeshNS
    Hi, I am new to game development. I have seen many examples and tried some myself, like displaying a triangle or a cube. Now I want to render a Collada object, so I created one using Google SketchUp and I am trying to render it. The thing I don't understand is that in all the examples the vertex coordinates lie between -1.0 and +1.0, but when I looked into my Collada file, the vertices ranged from -30.0 to 90.0. I know that vertices outside the -1.0 to +1.0 range will not display on the iPhone screen. So can you please tell me the secret behind converting object coordinates to normalized device coordinates? My previous triangle was defined as:

        struct Vertex {
            float Position[3];
            float Color[4];
        };

        const Vertex Vertices[] = {
            {{-0.5, -0.866}, {1, 1, 0.5f, 1}},
            {{0.5, -0.866}, {1, 1, 0.5, 1}},
            {{0, 1}, {1, 1, 0.5, 1}},
            {{-0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0.5, -0.866}, {0.5f, 0.5f, 0.5f}},
            {{0, -0.4f}, {0.5f, 0.5f, 0.5f}},
        };

    And now the triangle from the Collada file is:

        const Vertex Vertices[] = {
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5f, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 176.3763924, 0.0000000}, {1, 1, 0.5, 1}},
            {{-20.2205588, 90.1263924, 0.0000000}, {1, 1, 0.5, 1}},
            {{39.4202092, 90.1263924, 0.0000000}, {1, 1, 0.5f, 1}},
        };
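
    The usual fix here is to compute the model's bounding box and scale/center it into the normalized cube. Below is a minimal sketch, in Python purely for illustration (the function name and the standalone normalization pass are my own assumptions; in a real renderer you would fold the equivalent scale and translation into the model-view matrix instead of rewriting vertex data):

        def normalize_vertices(vertices):
            """Scale and center (x, y, z) tuples into the [-1, 1] cube."""
            xs, ys, zs = zip(*vertices)
            center = [(min(a) + max(a)) / 2.0 for a in (xs, ys, zs)]
            # Use the largest half-extent across all axes so the model's
            # aspect ratio is preserved.
            half_extent = max((max(a) - min(a)) / 2.0 for a in (xs, ys, zs)) or 1.0
            return [tuple((c - m) / half_extent for c, m in zip(v, center))
                    for v in vertices]

        triangle = [(39.4202092, 90.1263924, 0.0),
                    (-20.2205588, 90.1263924, 0.0),
                    (-20.2205588, 176.3763924, 0.0)]
        print(normalize_vertices(triangle))  # every coordinate now in [-1, 1]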

    Read the article

  • Bizarre SSH Problem - It won't even start

    - by thallium85
    I recently installed Ubuntu 12.04 Precise, got it up and running with some MediaWiki software and a static IP on the box and router, and was able to access the main page even from a cell phone. Everything seemed great... Then I wanted to finally get rid of the monitor and keyboard and log in remotely via SSH. I installed openssh-server, left everything pointing to port 22 for a test run, and installed PuTTY on my Windows XP machine. I got a connection refused. So I went back and started checking the Ubuntu install itself (I'm running as root from this point on):

        $ sudo -s
        $ service ssh status
        ssh stop/waiting
        $ service ssh start
        ssh start/running, process 2212
        $ service ssh status
        ssh stop/waiting

    Apparently ssh has stopped, or is waiting for something...

        $ ssh localhost
        ssh: connect to host localhost port 22: Connection refused

    I can't even connect to myself. I checked ufw (the firewall) to see if port 22 is doing alright:

        $ sudo ufw status
        Status: active

        To           Action    From
        --           ------    ----
        22           ALLOW     Anywhere
        22/tcp       ALLOW     Anywhere
        22           ALLOW     Anywhere (v6)
        22/tcp       ALLOW     Anywhere (v6)

    sshd_config shows only "Port 22". Is ssh not using the right IP address at all? I just don't get what I did wrong here. When this is up and running I will definitely change the port number, but for now I don't want to mess with the default install too much until a test run with PuTTY is successful.

    Edit: Here are my sshd_config file and my ssh_config file. The command /usr/sbin/sshd -p 22 -D -d -e returns:

        /etc/ssh/sshd_config line 159: Subsystem 'sftp' already defined.

    Edit: @phoibus's suggestion of moving the sshd_config file and reinstalling did the trick!

        service ssh status

    The above command shows that ssh is now running, and I am now able to log in remotely from my Windows XP computer via PuTTY. Thanks so much! I can now use my monitor for other things!

    Read the article

  • Can higher-order functions in FP be interpreted as some kind of dependency injection?

    - by Giorgio
    According to this article, in object-oriented programming/design, dependency injection involves:

    - a dependent consumer,
    - a declaration of a component's dependencies, defined as interface contracts,
    - an injector that creates instances of classes that implement a given dependency interface on request.

    Let us now consider a higher-order function in a functional programming language, e.g. the Haskell function

        filter :: (a -> Bool) -> [a] -> [a]

    from Data.List. This function transforms a list into another list and, in order to perform its job, it uses (consumes) an external predicate function that must be provided by its caller. For example, the expression

        filter (\x -> (mod x 2) == 0) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    selects all even numbers from the input list. But isn't this construction very similar to the pattern illustrated above, where the filter function is the dependent consumer, the signature (a -> Bool) of the function argument is the interface contract, and the expression that uses the higher-order function is the injector that, in this particular case, injects the implementation (\x -> (mod x 2) == 0) of the contract? More generally, can one relate higher-order functions and their usage pattern in functional programming to the dependency injection pattern in object-oriented languages? Or, in the other direction, can dependency injection be compared to using some kind of higher-order function?
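
    To make the analogy concrete, here is a rough side-by-side sketch in Python (all names are hypothetical, chosen only for illustration): the OO version injects a predicate object through a constructor, while the functional version passes the predicate as a plain function argument.

        class EvenPredicate:
            """Implements the 'interface contract': test(x) -> bool."""
            def test(self, x):
                return x % 2 == 0

        class Filter:
            """Dependent consumer: needs a predicate, injected at construction."""
            def __init__(self, predicate):
                self.predicate = predicate  # the injected dependency
            def apply(self, items):
                return [x for x in items if self.predicate.test(x)]

        # OO style: the calling code plays the role of the injector.
        print(Filter(EvenPredicate()).apply(range(1, 11)))

        # FP style: the 'contract' is just the expected function signature.
        print(list(filter(lambda x: x % 2 == 0, range(1, 11))))

    In both cases the consumer is written against an abstract contract and the concrete behavior is supplied from outside, which is the essence of the comparison being asked about.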

    Read the article

  • Building a template engine - starting point

    - by Anirudh
    We're building a Django-based project with a template component. This component will be separate from the project as such and can be Django/Python, Node, Java or whatever works. The templates have to be rendered into HTML. They will contain references to objects with properties that are defined in the DB, say, a bus. For example, a reference could be something like:

        [object type="vehicle" weight="heavy"]

    and the engine would have to pull a random object from the DB fulfilling the criteria type="vehicle" and weight="heavy" (a bus, truck or jet) and then substitute that tag with an image, say, of a bus. It would also have to be able to handle some processing, e.g.:

        What is [X type="integer" lte="10"] + [Y type="integer" lte="10"]

        [option X+Y correct_ans="true"]
        [option X-Y correct_ans="false"]
        [option X+Y+1 correct_ans="false"]

    The engine would be expected to fill in a random integer value <= 10 for X and Y and show radio buttons for each of the options. It would also have to store the fact that the first option is the correct answer. Does it make sense to write something from scratch? Or is it better to use an existing templating system (like Django's own) as a starting point? Any suggestions on how I can approach this?
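
    Whatever the final design, the core of such an engine is a substitution pass over the custom tags. Here is a minimal sketch in Python (the tag grammar, the resolver and all names are my own assumptions, not a proposed design); a real implementation would dispatch [object ...] tags to DB queries and record the correct_ans metadata somewhere:

        import random
        import re

        TAG_RE = re.compile(r'\[(\w+)((?:\s+\w+="[^"]*")*)\s*\]')
        ATTR_RE = re.compile(r'(\w+)="([^"]*)"')

        def resolve(name, attrs):
            # Hypothetical resolver: only handles the integer case from the
            # example above; other tag types would be added here.
            if attrs.get("type") == "integer":
                return str(random.randint(1, int(attrs.get("lte", "10"))))
            return '[unresolved:%s]' % name

        def render(template):
            def substitute(match):
                name, raw_attrs = match.group(1), match.group(2)
                return resolve(name, dict(ATTR_RE.findall(raw_attrs)))
            return TAG_RE.sub(substitute, template)

        print(render('What is [X type="integer" lte="10"] + [Y type="integer" lte="10"]'))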

    Read the article

  • Passing Parameters to an ADF Page through the URL - Part 2.

    - by shay.shmeltzer
    I showed before how to pass a parameter on the URL when invoking a task flow (where the task flow starts with a method call and then a page). However, in some simpler scenarios you don't actually need a full-blown task flow. Instead you can use page-level parameters defined for your page in the adfc-config.xml file. Below is a demo of this technique. I'm also taking advantage of this video to show the concept of a view-object-level service method and how to invoke it from your page.

    P.S. You might wonder: why not just reference #{param.amount} as the value set for the method parameter? Why do I need to copy it into a viewScope parameter? The advantage of placing the value in the viewScope is that it is still available after the page has gone through several submits. For example, if you switch the "partialSubmit" property of the "Next" button to false in the above example, the minute you press the button to go to the next department the param.amount value is gone. The viewScope value, however, is still there as long as you stay on this page.

    Read the article

  • The MDM Journey: From the Customer Perspective

    - by Mala Narasimharajan
    Master Data Management is about more than just a single version of the truth or providing a 360-degree view of the customer. It spans multiple domains, ranging from customers to suppliers to products and beyond. MDM is pivotal to providing a solid customer experience, one that results in repeat business, continued loyalty and, last but not least, high customer satisfaction. Customer experience is not only defined as accurate information about the customer for the enterprise, but also as presenting the customer with the right information about products, orders, product availability, etc. Let's take a look at a couple of customer use cases with Oracle MDM.

    Oracle MDM is a key platform for increasing upsell/cross-sell opportunities, improving the targeting of customers, uncovering new sales opportunities, reducing inaccuracies in mailing marketing materials to prospects, and tapping into the full value of a customer across business units more accurately. A leading investment and private bank leverages Oracle MDM to do a better job of identifying clients and their levels of investment, and to consistently manage them across areas such as credit, risk, new accounts, etc. Ultimately, they are looking to understand client investments and touchpoints across the company's offerings. Another use case for Oracle MDM is a major financial and insurance services company with clients worldwide, looking to resolve customer data inaccuracies and client information stored differently across multiple systems. For more information on Oracle Master Data Management, click here.

    Read the article

  • Keyring no longer prompts for password when SSH-ing

    - by Lie Ryan
    I remember that I used to be able to do "ssh [email protected]" and have a prompt ask me for a password to unlock the keyring for the whole GNOME session, so subsequent ssh calls wouldn't need the keyring password any longer (not quite sure if this was in Ubuntu or another distro). But nowadays doing "ssh [email protected]" asks me, in the terminal, for my keyring password every single time, which defeats the purpose of using SSH keys. I checked:

        $ cat /etc/pam.d/lightdm | grep keyring
        auth optional pam_gnome_keyring.so
        session optional pam_gnome_keyring.so auto_start

    which looks fine, and:

        $ pgrep keyring
        1784 gnome-keyring-d

    so the keyring daemon is alive. I finally found that the SSH_AUTH_SOCK variable (and GNOME_KEYRING_CONTROL, GPG_AGENT_INFO and GNOME_KEYRING_PID) are not being set properly. What is the proper way to set these variables, and why aren't they being set in my environment (i.e. shouldn't they be set in a default install)? I guess I could set them in .bashrc, but then the variables would only be defined in bash sessions; while that is fine for ssh, I believe the other environment variables are necessary for GUI apps to use the keyring.

    Read the article

  • BusEnum2 and a Minor Bug Fix

    - by Kate Moss' Open Space
    The default root bus driver, BusEnum, enumerates and activates drivers one by one in a synchronized manner. Not only does this slow down boot time, but in the event that any driver's init function (XXX_Init) hangs, the whole system won't boot at all. There is a sample of an enhanced root bus driver, BusEnum2, at http://msdn.microsoft.com/en-us/library/dd187254.aspx. The page provides the sample code and a detailed explanation of the design concept. With the multi-threaded BusEnum2 on a CE7 system with SMP enabled, the scalability is even more significant: since you have more than one processor, it can load drivers in parallel!

    Everything looks good so far, except that there is a small bug in the sample code. Fortunately, it is easy to fix, but hard to trace if you ever encounter it! The BUSENUM2 flag is only defined in BUSENUM2\BUSDEF\sources, but not in BUSENUM2\BUSENUM\sources. The DeviceFolder is implemented in BUSENUM2\BUSDEF, but the instance is created in BUSENUM2\BUSENUM\busenum.cpp, so the result is that it allocates less memory than is actually needed. Add

        CDEFINES=$(CDEFINES) -DBUSENUM2

    to BUSENUM2\BUSENUM\sources and the problem is fixed!

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administrating our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, their point is that workflows (and their execution) are tool-agnostic. My take is that DVCSs are better at encouraging flexible, well-defined workflows, because of the branching inherently occurring in the background (anonymous branches), and that you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even though I have made the arguments above, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them?

    Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because work is just shared in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be achieved through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article

  • Java SE 8 (with JavaFX) Developer Preview Release for ARM

    - by Roger Brinkley
    In an effort to get ARM developers testing Java SE 8 before the scheduled release later this year, a Java SE 8 Developer Preview Release for ARM has been made available. This release has been tested on the Raspberry Pi but should work on other ARM platforms. In addition to the new Java SE features, this release provides specific support for the hard-float ABI on the Raspberry Pi, which has been anticipated by a number of developers. Additionally, this release includes support for an optimized JavaFX. The specific configurations of JDK 8 on ARM are defined below:

    - JavaFX is supported on ARM architecture v6/7 (hard float)
    - Supported platforms without JavaFX:
      - ARM architecture v6/7 (hard float)
      - ARM architecture v7 (VFP, little endian)
      - ARM architecture v5 (soft float, little endian)
      - Linux x86

    The download page includes setup instructions for a Raspberry Pi device as well as demos and samples. Developers are also encouraged to try their own applications and to share their stories via the JavaFX or Project Feedback Forums. If you've got a Raspberry Pi or another ARM device, it's time to get started with the Java SE 8 Developer Preview release.

    Read the article

  • Is there a cheaper non-express non-student, non-msdn version of Visual Studio 2010 that supports plugins in the US than the $710 Professional Edition?

    - by Justin Dearing
    I've never actually purchased a copy of Visual Studio myself. SharpDevelop and the Express editions have always been good enough for my personal use, and my employers have always furnished me with the IDEs I needed to serve them. However, I'm thinking of actually paying for a copy for my personal laptop. I need this mainly so I can open solutions that contain web projects. So my question is: is there an edition cheaper than the $710 Pro edition on Amazon that will do what I need? http://www.amazon.com/Microsoft-C5E-00521-Visual-Studio-Professional/dp/B0038KTO8S/ref=sr_1_2?ie=UTF8&qid=1287456230&sr=8-2

    What I need is defined as:

    - Open a solution with C#, Web App, VB.NET, and Web projects.
    - Install add-ins like ReSharper, TestDriven.NET, SCM plugins, etc.
    - Some level of DB project support; at least the ability to open a dbproj. I only need that for SCM hooks. SSMS and SQLCMD are good enough for actually editing databases.
    - The ability to install F#, IronPython, IronRuby, etc.

    Now, naturally, I'm a fairly intelligent, resourceful person, so I realize I could get Visual Studio in a questionable manner. That's not what I'm looking to do. I want a legal copy, but I don't want a student copy or an MSDN copy. I want a real copy, and I just want to make sure I get the cheapest edition that serves my needs.

    Read the article

  • What is logical cohesion, and why is it bad or undesirable?

    - by Matt Fenwick
    From the c2wiki page on coupling & cohesion:

    Cohesion (interdependency within module) strength/level names (from worst to best; high cohesion is good):

    - Coincidental Cohesion (worst): module elements are unrelated.
    - Logical Cohesion: elements perform similar activities as selected from outside the module, i.e. by a flag that selects the operation to perform (see also CommandObject); i.e. the body of the function is one huge if-else/switch on an operation flag.
    - Temporal Cohesion: operations are related only by the general time at which they are performed (e.g. initialization() or FatalErrorShutdown()).
    - Procedural Cohesion: elements are involved in different but sequential activities, each on different data (usually this could be trivially split into multiple modules along linear sequence boundaries).
    - Communicational Cohesion: unrelated operations, except that they need the same data or input.
    - Sequential Cohesion: operations on the same data in significant order; output from one function is input to the next (pipeline).
    - Informational Cohesion: a module performs a number of actions, each with its own entry point, with independent code for each action, all performed on the same data structure. Essentially an implementation of an abstract data type. E.g. define the structure of sales_region_table and its operators: init_table(), update_table(), print_table().
    - Functional Cohesion: all elements contribute to a single, well-defined task, i.e. a function that performs exactly one operation: get_engine_temperature(), add_sales_tax().

    (Emphasis mine.) I don't fully understand the definition of logical cohesion. My questions are: what is logical cohesion? Why does it get such a bad rap (second-worst kind of cohesion)?
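
    For what it's worth, here is a small sketch in Python (a hypothetical example, not from the c2wiki page) of what the list seems to mean by logical cohesion, next to the functionally cohesive alternative:

        # Logical cohesion: one entry point whose body is a big switch on an
        # operation flag chosen by the caller, from outside the module.
        def process(data, operation):
            if operation == "validate":
                return all(x >= 0 for x in data)
            elif operation == "summarize":
                return sum(data) / len(data)
            elif operation == "export":
                return ",".join(str(x) for x in data)
            else:
                raise ValueError("unknown operation: %s" % operation)

        # Functional cohesion: one well-defined task per function.
        def validate(data):
            return all(x >= 0 for x in data)

        def summarize(data):
            return sum(data) / len(data)

        def export(data):
            return ",".join(str(x) for x in data)

    The branches of process() share almost nothing except a superficial similarity, and every caller is coupled to the flag vocabulary, which presumably is why this style ranks second worst.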

    Read the article

  • Massive Xubuntu desktop malfunction

    - by viktiglemma
    I'm using Xubuntu 11.04. Everything worked fine before, but when booting today the following happened:

    1) Window focus does not leave the first-opened program. This means that if I have Firefox open and I open a terminal, window focus will never be transferred to the terminal. (EDIT: if I open a terminal first, and then open Firefox, Firefox steals the focus.)

    2) Window menus have disappeared. The maximize, minimize, etc. buttons and the menu are gone.

    3) In the Xfce Settings Manager, the "Window Manager" settings window is empty. There is just a gray screen, so I cannot modify any window settings.

    4) The keyboard shortcuts I had previously defined using the Settings Manager no longer work. Furthermore, ALT-TAB no longer cycles between windows.

    5) The mouse pointer does not show when I first log in. I have to log out and log in again (with an invisible pointer) before the mouse shows itself.

    EDIT: 6) I cannot resize or move the Thunderbird window, but I can move the Firefox window.

    What can I do to troubleshoot this?

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup, I took the time to create an API that simplifies access to Azure Tables (the Enzo Azure API). At first my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start writing complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code: the following examples rely on a strongly typed class called MyData. The way MyData is defined is similar for the Azure SDK and the Enzo Azure API, except that the classes inherit from different bases. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes used with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the variables used: _tableName stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();

            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();

            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also note that this method is strongly typed because it uses LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities with dozens of properties your code will grow.
    And from a maintenance standpoint, when a new property is added you may need to change the mapping code. Note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all of them) to reduce network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. The Enzo Azure API also makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls with appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, ...) and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers it faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).
    With Fetch Strategies

    When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56 Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries...)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including MCDBA, MCSE and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Customer Experience Management for Retail 2.0 - part 2 / 2

    - by Sanjeev Sharma
    In the previous post I discussed some of the key trends shaping up in the retail industry, their implications, and the challenges facing retailers seeking to regain control of the buyer-seller relationship. Is Customer Experience Management the panacea for ailing retailers who are now awakening to the power of the consumer? Quite honestly, customer acquisition, retention and satisfaction have been top of mind for retailers for quite some time now. The missing piece of the puzzle is bringing all those countless hours of strategy and planning to fruition; this is more of an execution gap than anything else. Although technology has made consumers more informed, more mobile and more social, customer experience is still largely defined by delivering on the following:

    - Consistent experiences, whether shopping online or offline
    - Personalizable interactions ("mass market" sounds good as an internal strategy, but not when you are the buyer!)
    - Timely order fulfillment, if not proactive notification of delays

    Below is a concept architecture for streamlining front-end, mid-office and back-end interfaces through shared processes, to achieve consistency and efficiency in managing the customer experience from order capture to order provisioning.

    Read the article

  • Parameterized Django models

    - by mgibsonbr
    In principle, a single Django application can be reused in two or more projects, providing functionality relevant to both. That implies that the same database structure (tables and relations) will be re-created identically in different databases, and most of the time this is not a problem (assuming the projects/databases are unrelated, for instance when someone downloads a complete app to use in their own project). Sometimes, however, the models must be "tweaked" a little to better fit the problem's needs. This can be accomplished by forking the app, but I wondered if there wouldn't be a better option in cases where the app designer can anticipate the most common customizations.

    For instance, if I have a model that could relate to another as one-to-one or one-to-many, I could make the unique property a parameter that can be specified in the project's settings:

        class This(models.Model):
            other = models.ForeignKey(Other, unique=settings.OTHER_TO_THIS)

    Or if a model can relate to many others, I could create an intermediate table for each of them (thus enforcing referential integrity) instead of using generic FKs:

        for related in settings.MODELS_RELATED_TO_OTHER:
            model_name = '%s_Other' % related
            globals()[model_name] = type(model_name, (models.Model,), {
                'me': models.ForeignKey(find_model_class(related)),
                'other': models.ForeignKey(Other),
                # Some other properties all intersection tables must have
            })

    Etc. Let me stress that I'm not proposing to change the models at runtime or anything like that; once the parameters are defined and syncdb is called for the first time, those parameters are not to be changed again (unless you're doing a schema migration). Is this a good design? Are there better ways to accomplish the same thing, or maybe drawbacks I couldn't anticipate? This technique is meant to be used sparingly (only on apps meant to be reused in wildly different contexts, and only when a specific customization need can be detected while the app model is being designed).

    Read the article

  • How are system services started in 12.10?

    - by Salem
    One thing that has always confused me in Ubuntu is how system services are started. I know that Ubuntu uses Upstart and supports SysV, but which one is used to start a given service? This matters when you want to "manually" control a service. For example, on my system I have files in both /etc/init.d/<service> (SysV) and /etc/init/<service>.conf (Upstart) for the following services:

        acpid, mysql, networking, qemu-kvm, ufw, libvirt-bin

    So if I want to disable MySQL at startup, should I use the Upstart way or the SysV way to disable it? Also, how can I tell which of the two is really used to start a given service?

    Edit: The real doubt here is not how to disable/enable services using SysV/Upstart. What really confuses me is that some services seem to be defined (and enabled) in SysV and Upstart at the same time. Is there any precedence between them (e.g. if mysql is enabled in both, launch it using SysV)? Or can it be the case that one tool uses the other in the background?

    Read the article

  • Embedded Nashorn in JEditorPane

    - by Geertjan
    Here's a prototype for some kind of back-office content management system. Several interesting goodies are included, such as an embedded JavaScript editor, as can be seen in the screenshot. Key items of interest are as follows:

    - An embedded JavaScript editor (i.e., the latest and greatest Nashorn technology; look it up if you're not aware of what that is). The way that's done is to include the relevant JavaScript modules in your NetBeans Platform application. Make very sure to include "Lexer to NetBeans Bridge", which does a bunch of critical stuff under the hood. The JEditorPane is defined as follows, along the lines I blogged about recently thanks to Steven Yi:

        javaScriptPane.setContentType("text/javascript");
        EditorKit kit = CloneableEditorSupport.getEditorKit("text/javascript");
        javaScriptPane.setEditorKit(kit);
        javaScriptPane.getDocument().putProperty("mimeType", "text/javascript");

      Note that "javaScriptPane" above is simply a JEditorPane.

    - Timon Veenstra's excellent solution for integrating Nodes with MultiViewElements, which is described here by Timon, and nowhere else in the world. The tab you see above is within a pluggable container, so anyone else could create a new module and register their own MultiViewElement such that it will be incorporated into the editor.

    - A small trick to ensure that only one window opens per news item:

        @NbBundle.Messages("OpenNews=Open")
        private class OpenNewsAction extends AbstractAction {

            public OpenNewsAction() {
                super(Bundle.OpenNews());
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                News news = getLookup().lookup(News.class);
                Mode editorMode = WindowManager.getDefault().findMode("editor");
                for (TopComponent tc : WindowManager.getDefault().getOpenedTopComponents(editorMode)) {
                    if (tc.getDisplayName().equals(news.getTitle())) {
                        tc.requestActive();
                        return;
                    }
                }
                TopComponent tc = MultiViews.createMultiView("application/x-newsnode", NewsNode.this);
                tc.open();
                tc.requestActive();
            }
        }

    The rest of what you see above is all standard NetBeans Platform stuff. The sources of everything you see above are here: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/CMSBackOffice

    Read the article

  • HTG Explains: How Internet Explorer Saves Your Passwords and How to Manage Them

    - by Taylor Gibb
    Privacy is very important when it comes to the digital world, but do you know exactly how your browser saves your passwords? Read on to find out what goes on behind the scenes.

    When it comes to web applications, there are many different types of authentication. One of them is called basic authentication, which is when you navigate to a website and a dialog box opens to ask for a username and password; this also happens to be the authentication mechanism defined in the RFC for HTTP. In the screenshot above you can see that there is a check box you can use to remember your credentials, but what does that actually do? You might also ask yourself what happens if you are not using basic authentication. There is another type of authentication called form authentication, where the authentication is built right into the web application, like on the How-To Geek website. This allows the developer to control the look and feel of the form that we use to log in.

    Read the article

  • Unable to debug an encodded javascript?

    - by miles away
    I'm having some problems debugging an encoded JavaScript. The script I'm referring to is given at this link over here. The encoding is simple and works by shifting the Unicode values by whatever CodeKey was used during encoding. The code that does the decoding is given below:

        <script language="javascript">
        function dF(s) {
            var s1 = unescape(s.substr(0, s.length - 1));
            var t = '';
            for (i = 0; i < s1.length; i++)
                t += String.fromCharCode(s1.charCodeAt(i) - s.substr(s.length - 1, 1));
            document.write(unescape(t));
        }
        </script>

    I'm interested in knowing or understanding the values (e.g. s1, t). For example, when i = 0, what values would s1.charCodeAt(i) and s.substr(s.length-1, 1) hold? The reason I'm doing this is to understand how the CodeKey really works. I don't see anything in the code above which tells it to decode on the basis of the CodeKey value. The only thing I can point to in the encoded text is the last character, which is set to 1, 2, 3 or 4 depending on the CodeKey selected during encoding. One can verify this using the link I have given above. To debug, I'm using the Firebug add-on with the script running on localhost on my WAMP server. I'm able to put a breakpoint on the JS using Firebug, but I'm unable to retrieve any of the user-defined parameters or functions I mentioned above. I want to know, in this context, what would be the best way to debug this encoded JS.
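
    To answer the "what values would s1 and t hold" part concretely, here is a rough re-implementation of dF() in Python, for illustration only (the encoder at the end is hypothetical, just to produce a round-trippable input). It makes the mechanism visible: the last character of the encoded string is the CodeKey itself, and everything before it is a URL-escaped string whose character codes were shifted up by that key.

        from urllib.parse import unquote

        def dF(s):
            key = int(s[-1])        # s.substr(s.length - 1, 1): the trailing CodeKey
            s1 = unquote(s[:-1])    # unescape(s.substr(0, s.length - 1))
            # t accumulates each character of s1 shifted back down by the key,
            # mirroring String.fromCharCode(s1.charCodeAt(i) - key).
            t = "".join(chr(ord(c) - key) for c in s1)
            return unquote(t)       # the JS version passes this to document.write()

        # Hypothetical encoder using CodeKey 1, to check the round trip:
        encoded = "".join("%%%02X" % (ord(c) + 1) for c in "Hello") + "1"
        print(dF(encoded))  # -> Hello

    So when i = 0, s1.charCodeAt(0) is the character code of the first unescaped character, and s.substr(s.length-1, 1) is always that trailing CodeKey digit; JavaScript silently coerces the one-character string to a number when subtracting.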

    Read the article

  • Oops! installer misses a lib during OIF 11g install under some conditions

    - by user12674042
    If you installed OIF 11g on OEL 6.2 64-bit, got past all the interesting gotchas, but then got stumped by this error in the WLS admin logs, with Enterprise Manager refusing to start correctly after what appeared to be a fully successful install and configuration:

        <User defined listener oracle.sysman.eml.app.ContextInitializer failed: java.lang.NoClassDefFoundError: HTTPClient/ProtocolNotSuppException.
        java.lang.NoClassDefFoundError: HTTPClient/ProtocolNotSuppException
            at oracle.sysman.eml.app.ContextInitializer.contextInitialized(ContextInitializer.java:1035)
        ...
        Caused By: java.lang.ClassNotFoundException: HTTPClient.ProtocolNotSuppException
            at weblogic.utils.classloaders.GenericClassLoader.findLocalClass(GenericClassLoader.java:297)
            at weblogic.utils.classloaders.GenericClassLoader.findClass(GenericClassLoader.java:270)
        ...
        <Error> <Deployer> <BEA-149231> <Unable to set the activation state to true for the application 'em'.
        weblogic.application.ModuleException:
            at weblogic.servlet.internal.WebAppModule.startContexts(WebAppModule.java:1520)
        ...

    ...the problem is that the installer fails to place a required jar (http_client.jar) in the appropriate location for your WLS domain. Assuming you have the Oracle DB installed on the same server, just copy the jar to the lib folder in your domain. For example, if your domain is IDMDomain and your middleware install location is /u01/Middleware, then:

        cp /u01/app/oracle/product/11.2.0/db_1/oui/jlib/http_client.jar \
          /u01/Middleware/user_projects/domains/IDMDomain/lib

    and restart your admin WLS. Enterprise Manager will start working. Hopefully this will save others some time while on the bleeding edge...

    Read the article

  • Locale variables have no effect in remote shell (perl: warning: Setting locale failed.)

    - by Janning
    I have a fresh Ubuntu 12.04 installation. When I connect to my remote server I get errors like this:

        ~$ ssh example.com sudo aptitude upgrade
        ...
        Traceback (most recent call last):
          File "/usr/bin/apt-listchanges", line 33, in <module>
            from ALChacks import *
          File "/usr/share/apt-listchanges/ALChacks.py", line 32, in <module>
            sys.stderr.write(_("Can't set locale; make sure $LC_* and $LANG are correct!\n"))
        NameError: name '_' is not defined
        perl: warning: Setting locale failed.
        perl: warning: Please check that your locale settings:
                LANGUAGE = (unset),
                LC_ALL = (unset),
                LC_TIME = "de_DE.UTF-8",
                LC_MONETARY = "de_DE.UTF-8",
                LC_ADDRESS = "de_DE.UTF-8",
                LC_TELEPHONE = "de_DE.UTF-8",
                LC_NAME = "de_DE.UTF-8",
                LC_MEASUREMENT = "de_DE.UTF-8",
                LC_IDENTIFICATION = "de_DE.UTF-8",
                LC_NUMERIC = "de_DE.UTF-8",
                LC_PAPER = "de_DE.UTF-8",
                LANG = "en_US.UTF-8"
            are supported and installed on your system.
        perl: warning: Falling back to the standard locale ("C").
        locale: Cannot set LC_ALL to default locale: No such file or directory
        No packages will be installed, upgraded, or removed.
        0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0 B of archives. After unpacking 0 B will be used.
        ...

    I don't have this problem when I connect from an older Ubuntu installation. This is the output on my Ubuntu 12.04 installation; LANG and LANGUAGE are set:

        $ locale
        LANG=de_DE.UTF-8
        LANGUAGE=de_DE:en_GB:en
        LC_CTYPE="de_DE.UTF-8"
        LC_NUMERIC=de_DE.UTF-8
        LC_TIME=de_DE.UTF-8
        LC_COLLATE="de_DE.UTF-8"
        LC_MONETARY=de_DE.UTF-8
        LC_MESSAGES="de_DE.UTF-8"
        LC_PAPER=de_DE.UTF-8
        LC_NAME=de_DE.UTF-8
        LC_ADDRESS=de_DE.UTF-8
        LC_TELEPHONE=de_DE.UTF-8
        LC_MEASUREMENT=de_DE.UTF-8
        LC_IDENTIFICATION=de_DE.UTF-8
        LC_ALL=

    Does anybody know what has changed in Ubuntu to cause this error message on remote servers?

    Read the article

  • Prefer class members or passing arguments between internal methods?

    - by geoffjentry
    Suppose that within the private portion of a class there is a value which is used by multiple private methods. Do people prefer having this defined as a member variable for the class, or passing it as an argument to each of the methods, and why? On one hand, I can see the argument that reducing state (i.e. member variables) in a class is generally a good thing; on the other hand, if the same value is used repeatedly throughout a class's methods, it seems like an ideal candidate for representation as class state, if nothing else to make the code visibly cleaner.

    Edit: To clarify some of the comments/questions that were raised: I'm not talking about constants, and this isn't related to any particular case; it's just a hypothetical that I was discussing with some other people. Ignoring the OOP angle for a moment, the particular use case I had in mind was the following (assume pass-by-reference, just to make the pseudocode cleaner):

        int x
        doSomething(x)
        doAnotherThing(x)
        doYetAnotherThing(x)
        doSomethingElse(x)

    So what I mean is that there's some variable that is common to multiple functions; in the case I had in mind it was due to chaining of smaller functions. In an OOP system, if these were all methods of a class (say, due to refactoring via extracting methods from a large method), that variable could be passed around to all of them, or it could be a class member.
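
    The two shapes being compared look roughly like this in Python (a hypothetical example, with placeholder method bodies):

        class ArgumentStyle:
            """The shared value is threaded through each helper explicitly."""
            def run(self):
                x = self._compute_x()
                self._do_something(x)
                self._do_another_thing(x)
                self._do_something_else(x)

            def _compute_x(self):
                return 42

            def _do_something(self, x): ...
            def _do_another_thing(self, x): ...
            def _do_something_else(self, x): ...

        class MemberStyle:
            """The shared value is set once as instance state, then read."""
            def run(self):
                self._x = self._compute_x()
                self._do_something()
                self._do_another_thing()
                self._do_something_else()

            def _compute_x(self):
                return 42

            def _do_something(self): ...
            def _do_another_thing(self): ...
            def _do_something_else(self): ...

    The first makes each helper's inputs explicit and easy to test in isolation; the second reduces signature noise at the cost of hidden coupling through the member.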

    Read the article

  • C++: calling non-member functions with the same syntax of member ones

    - by peoro
    One thing I'd like to do in C++ is to call non-member functions with the same syntax you use to call member functions:

        class A { };

        void f( A & this ) { /* ... */ }

        // ...

        A a;
        a.f(); // this is the same as f(a);

    Of course this could only work as long as:

    - f is not virtual (since it cannot appear in A's virtual table),
    - f doesn't need to access A's non-public members,
    - f doesn't conflict with a function declared in A itself (A::f).

    I'd like such a syntax because in my opinion it would be quite comfortable and would push good habits:

    - calling str.strip() on a std::string (where strip is a function defined by the user) would read a lot better than calling strip(str);
    - most of the time (always?) classes provide some member functions which don't need to be members (i.e. they are not virtual and don't use non-public members). This breaks encapsulation, but is the most practical thing to do (due to point 1).

    My question is: what do you think of such a feature? Do you think it would be something nice, or something that would introduce more issues than it aims to solve? Could it make sense to propose such a feature for the next standard (the one after C++0x)? Of course this is just a brief description of the idea; it is not complete. We'd probably need to explicitly mark a function with a special keyword to let it work like this, and there is plenty of other stuff to work out.
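
    Incidentally, the semantics being proposed already exist in some languages. A quick Python illustration (hypothetical class, for comparison only): a method there really is just a function whose first parameter is the receiver, so the member-call syntax and the plain function call are interchangeable.

        class A:
            def f(self):
                return "called on " + repr(self)

        a = A()
        print(a.f())    # member-call syntax
        print(A.f(a))   # the very same call written as an ordinary function call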

    Read the article

  • Essbase Analytics Link (EAL) - Performance of some operation of EAL could be improved by tuning of EAL Data Synchronization Server (DSS) parameters

    - by Ahmed Awan
    Generally, the performance of some EAL (Essbase Analytics Link) operations can be improved by tuning the EAL Data Synchronization Server (DSS) parameters.

    a. It is expected that the DSS machine is a 64-bit machine with 4-8 cores and 5-8 GB of RAM dedicated to DSS.

    b. To change the DSS configuration, open the EAL Configuration Tool on the DSS machine, click Next, and define:

    - "Job Units": <number of cores dedicated to DSS> * 1.5
    - "Max Memory Size" (if this is a 64-bit machine): ~1 GB for each Job Unit. If the DSS machine is 32-bit, the max memory size is 2600 MB.
    - "Data Store Size": depends on the number of bridges and the volume of the HFM applications, but in most cases 50000 MB is enough. This volume should be available on the drive defined as the "Data Store Dir".

    Continue with the configuration and finish it. After that, DSS should be restarted to pick up the new definitions.

    Read the article
