Search Results

Search found 11734 results on 470 pages for 'refer to itself'.


  • Why shouldn't the DbContext object be referenced in the Service Layer?

    - by nazmoonnoor
    I've been looking at implementations of Service Layer and Controller interaction in blogs and in some open source projects. All of them seem to reference the DbContext object in repository classes, but avoid using it in service classes. Service classes essentially use IQueryable<T> references to DbSet<T>. I want to know why this practice is good, and why DbContext shouldn't be referenced in the Service Layer.
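    The pattern being described usually looks something like the sketch below (a minimal illustration, assuming Entity Framework; the entity and class names are hypothetical). The repository owns the DbContext; the service layer only ever sees an IQueryable<T>:

    using System.Data.Entity; // Entity Framework
    using System.Linq;

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public class ShopContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    // Repository layer: the only place that touches DbContext.
    public class OrderRepository
    {
        private readonly ShopContext _context = new ShopContext();

        // Expose a queryable view of the data, not the context itself.
        public IQueryable<Order> Orders
        {
            get { return _context.Orders; }
        }
    }

    // Service layer: composes queries without ever referencing DbContext.
    public class OrderService
    {
        private readonly OrderRepository _repository;

        public OrderService(OrderRepository repository)
        {
            _repository = repository;
        }

        public decimal TotalRevenue()
        {
            // LINQ here still executes in the database via the underlying DbSet<T>.
            return _repository.Orders.Sum(o => o.Total);
        }
    }

    One common argument for this split is that the service layer stays testable (the repository can be faked with an in-memory IQueryable<T>) and is not coupled to Entity Framework's unit-of-work lifetime.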


  • Readying Yourself For SEO

    And all of this is what influences the page rankings of a website. Page rankings are a ranking system that determines how high up your website will appear in the search results. Most people who browse the internet only look at the first and second pages of results when searching for anything.


  • How to plan a PHP-based project with a DB involved in the scenario below? [closed]

    - by San
    I'm starting a project on web monitoring, where other websites can be monitored. Recently I found the CodeIgniter, Yii, and Kohana frameworks online, but I'm confused about whether to choose one of them or to start directly. Moreover, this is the first big project I have planned. So can anyone give me suggestions on how to start, how to plan, and which books to refer to for building this kind of web application, and share some links so I can understand how to work on this project?


  • Single API Architecture

    - by user1901686
    When people refer to an architecture that involves a single service API that all clients talk to (a client can be an iPad app, etc.), what is the "client" for the web app? Is it: A) the web browser itself, so the entire app is written in HTML/CSS/JavaScript, Ajax calls are made to the service to fetch data, and changes are made through JavaScript; B) an MVC-like stack on a server, only instead of the controllers calling the model layer directly, they call the service API, which returns models that are used to render the traditional views; or C) something else?
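    For what it's worth, option B usually looks something like this sketch (assuming ASP.NET MVC and Json.NET; the endpoint URL and type names are hypothetical):

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Mvc;
    using Newtonsoft.Json;

    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProductsController : Controller
    {
        private static readonly HttpClient Api = new HttpClient();

        public async Task<ActionResult> Index()
        {
            // The controller calls the single service API instead of a model layer.
            string json = await Api.GetStringAsync("https://api.example.com/products");
            var products = JsonConvert.DeserializeObject<Product[]>(json);

            // The models returned by the API render a traditional server-side view.
            return View(products);
        }
    }

    In option A, by contrast, there is no server-side controller at all: the browser itself makes the equivalent request via Ajax and updates the DOM with JavaScript.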


  • Mission Critical: SQL Server Upgrade

    Are you faced with the task of doing a SQL Server upgrade? Do you know all the steps, and the right order in which to do them? You do? Even with interruptions and distractions? Maybe, but it is wise to be able to refer to the Mission-Critical Task checklist.


  • SEO Content to Deliver Value to Your Customers

    When most people talk about SEO content, they mean content written merely to please the search engines; when readers go through it, they find nothing of substance. With such content you might be able to get visitors to your site, but it will be very difficult to keep them there for long, because once they read the badly organized content they'll leave. That means your conversion rate will be pretty low, which ultimately signals that you are not making enough sales.


  • SQL SERVER – Sends backups to a Network Folder, FTP Server, Dropbox, Google Drive or Amazon S3

    - by pinaldave
    Let me tell you about one of the most useful SQL tools that every DBA should use – SQLBackupAndFTP. I have been using this tool since 2009, and it is the first program I install on a SQL Server. Download the free version, spend a minute on configuration, and your daily backups are safe in the cloud. In summary, SQLBackupAndFTP:

    - Creates SQL Server database and file backups on schedule
    - Compresses and encrypts the backups
    - Sends backups to a network folder, FTP server, Dropbox, Google Drive or Amazon S3
    - Sends email notifications of a job's success or failure

    SQLBackupAndFTP comes in Free and Paid versions (starting from $29); see the version comparison. The free version is fully functional for unlimited ad hoc backups or for scheduled backups of up to two databases, which will be sufficient for many small customers. What has impressed me from the beginning is that I understood how it works and was able to configure the job from a single form (see Image 1 – Main form above):

    1. Connect to your SQL Server and select the databases to be backed up.
    2. Click "Add backup destination" to configure where the backups should go (network folder, FTP server, Dropbox, Google Drive or Amazon S3).
    3. Enter your email address to receive email confirmations.
    4. Set the time to start daily full backups (or go to Settings if you need differential or transaction log backups on a flexible schedule).
    5. Press the "Run Now" button to test.

    You can get to the scheduling form by clicking the "Settings" button in the "Schedule" section. Select which types of backups you want and how often to run them, and you will see the scheduled backups in the "Estimated backup plan" list. A detailed tutorial is available on the developer's website.

    The SQLBackupAndFTP setup also gives you the option to install "One-Click SQL Restore" (you can install it stand-alone too), a basic tool for restoring just full backups. Basic as it is, you can drag and drop onto it the zip file created by SQLBackupAndFTP; it unzips the BAK file if necessary, connects to the SQL Server on startup, selects the right database, and is smart enough to restart the server to drop open connections if necessary. Very handy for developers who need to restore databases often.

    You may ask why this tool is better than the maintenance tasks available in SQL Server. While maintenance tasks are easy to set up, SQLBackupAndFTP is still way easier, and it integrates compression, encryption, FTP, cloud storage and email into one solution, which makes it superior to maintenance tasks in every respect. On the flip side, SQLBackupAndFTP is not the fanciest tool for managing backups or checking their health, and it only works reliably on local SQL Server instances; in other words, it has to be installed on the SQL Server itself. For remote servers it uses scripting, which is less reliable. This limitation is actually inherent in SQL Server itself, as the BACKUP DATABASE command creates the backup not on the client, but on the server itself.

    This tool is compatible with almost all known SQL Server versions. It works with SQL Server 2008 (all versions) and many of the previous versions, and it is especially useful for SQL Server Express 2005 and SQL Server Express 2008, as they lack built-in tools for backup. I strongly recommend this tool to all DBAs: try it, as it is free and does exactly what it promises. You can download your free copy of the tool from here. Please share your experience using this tool; I am eager to receive your feedback on this article.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • Using the Script Component as a Conditional Split

    This is a quick walkthrough of how you can use the Script Component to get Conditional Split-like behaviour, splitting your data across multiple outputs. We will use C# code to decide which rows flow to which output, rather than the expression syntax of the Conditional Split transformation.

    Start by setting up the source. For my example the source is a list of SQL objects from sys.objects, just a quick way to get some data:

    SELECT type, name FROM sys.objects

    type  name
    S     syssoftobjrefs
    F     FK_Message_Page
    U     Conference
    IT    queue_messages_23007163

    Shown above is a small sample of the data you could expect to see. Once you have set up your source, add the Script Component, selecting Transformation when prompted for the type, and connect it up to the source. Now open the component, but don't dive into the script just yet. First we need to select some columns: select the Input Columns page and then pick the columns we want to use as part of our filter logic. You only need to choose the columns used in the script itself, not columns you may want later.

    Next we need to add our outputs. Select the Inputs and Outputs page. You get one output by default, but we need to add some more; it wouldn't be much of a split otherwise. For this example we'll add just one more. Click the Add Output button, and you'll see a new output is added. Now we need to set some properties, so make sure our new Output 1 is selected. In the properties grid, change the SynchronousInputID property to our input, Input 0, and change the ExclusionGroup property to 1. Now select Output 0 and change its ExclusionGroup property to 2. The value itself isn't important, provided each output has a different value other than zero. Setting this property on both outputs allows us to split the data down one or the other, making each exclusive. If we left an output at 0, that output would get all the rows, which can itself be a useful feature, allowing you to copy selected rows to one output whilst retaining the full set of data in the other.

    Now we can go back to the Script page and start writing some code. For the example we will do a very simple test: if the value of the type column is U, for user table, the row goes down the first output; otherwise it ends up in the other. This mimics the exclusive behaviour of the Conditional Split transformation.

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Filter all user tables to the first output,
        // the remaining objects down the other
        if (Row.type.Trim() == "U")
        {
            Row.DirectRowToOutput0();
        }
        else
        {
            Row.DirectRowToOutput1();
        }
    }

    The code itself is very simple: a basic if clause determines which of the DirectRowToOutput methods we call, and there is one such method for each output. Of course you could write a lot more code to implement some very complex logic, but the final direction is still just a method call. If we now close the script component, we can hook up the outputs and test the package. Your numbers will vary depending on the sample database, but as you can see we have clearly split our input data into two outputs.

    As a final tip, when adding the outputs I would normally rename them, changing the Name in the Properties grid. The generated methods then follow that pattern, as do the path labels shown on the design surface, making everything that much easier to recognise.
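    Following that final tip, if the two outputs were renamed (say, to UserTables and OtherObjects; hypothetical names), the generated methods follow suit, so the script might read:

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // Same logic as before, but the method names now describe the destination.
        if (Row.type.Trim() == "U")
        {
            Row.DirectRowToUserTables();
        }
        else
        {
            Row.DirectRowToOtherObjects();
        }
    }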


  • Class design issue

    - by user2865206
    I'm new to OOP, and a lot of times I get stumped in situations like this example.

    Task: generate an XML document that contains information about a person. Assume the information is readily available in a database. Here is an example of the structure:

    <Person>
      <Name>John Doe</Name>
      <Age>21</Age>
      <Address>
        <Street>100 Main St.</Street>
        <City>Sylvania</City>
        <State>OH</State>
      </Address>
      <Relatives>
        <Parents>
          <Mother>
            <Name>Jane Doe</Name>
          </Mother>
          <Father>
            <Name>John Doe Sr.</Name>
          </Father>
        </Parents>
        <Siblings>
          <Brother>
            <Name>Jeff Doe</Name>
          </Brother>
          <Brother>
            <Name>Steven Doe</Name>
          </Brother>
        </Siblings>
      </Relatives>
    </Person>

    OK, let's create a class for each tag (i.e. Person, Name, Age, Address):

    - Assume each class is only responsible for itself and the elements it directly contains.
    - Each class will know (have defined by default) the classes that are directly contained within it.
    - Each class will have a process() function that adds itself and its children to the XML document we are creating.
    - When a child is drawn, as in the previous line, it will call process() as well.
    - Now we are in a recursive loop where each object draws its children until all are drawn.

    But what if only some of the tags need to be drawn, and the rest are optional? Some are optional based on whether the data exists (if we have it, we must draw it), and some are optional based on the preferences of the user generating the document. How do we make sure each object has the data it needs to draw itself and its children? We could pass a massive array down through every object, but that seems ugly, doesn't it? We could have each object query the database for its data, but that's a lot of queries, and how does each object know what its query is? What if we want to get rid of a tag later? There is no way to reference them.

    I've been thinking about this for 20 hours now. I feel like I am misunderstanding a design principle or am just approaching this all wrong. How would you go about programming something like this? I suppose this problem could apply to any scenario where there are classes that create other classes, but the created classes need information to run. How do I get the information to them in a way that doesn't feel hacked together? Thanks for all of your time; this has been kicking my ass.
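    The recursive design described above might be sketched like this (a rough illustration only; the class names, the use of LINQ to XML, and the constructor arguments are all assumptions, not part of the question):

    using System.Collections.Generic;
    using System.Xml.Linq;

    // Each node is responsible for itself and its directly contained children.
    public abstract class DocumentNode
    {
        protected readonly List<DocumentNode> Children = new List<DocumentNode>();

        public abstract string TagName { get; }
        protected virtual string Value { get { return null; } }

        // process(): add this node to the parent element, then recurse.
        public void Process(XElement parent)
        {
            var element = new XElement(TagName, Value);
            parent.Add(element);
            foreach (var child in Children)
            {
                child.Process(element);
            }
        }
    }

    public class NameNode : DocumentNode
    {
        private readonly string _name;
        public NameNode(string name) { _name = name; }
        public override string TagName { get { return "Name"; } }
        protected override string Value { get { return _name; } }
    }

    public class PersonNode : DocumentNode
    {
        public PersonNode(string name)
        {
            Children.Add(new NameNode(name));
        }
        public override string TagName { get { return "Person"; } }
    }

    Seen this way, the "optional tags" question becomes: who decides which children get added to Children? One common answer is to hand each node a small context object (the relevant database record plus the user's preferences) at construction time, rather than passing a massive array through every level.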


  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than sending a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers: in a cluster with 500 members, that means each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is expensive not only in terms of the network itself, but also in the CPU required to send (or receive) a message (copying the packet bytes, processing an interrupt, etc.).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and they also occur infrequently, so they are usually not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
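    Spelling out the arithmetic behind the numbers in the opening paragraph (a rough bound, using the stated ~70,000 packets per second for a single gigabit connection):

    70,000 packets/s ÷ 499 recipients ≈ 140 cluster-wide messages/s per server
    (70,000 ÷ 10 members sharing one NIC) ÷ 499 recipients ≈ 14 cluster-wide messages/s per member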


  • Excel cannot access the file with IIS7 & Windows Server 2008 R2 (64-bit)

    - by user838204
    I have a web project (.NET 4) that needs to access an Excel file, but it ends up with the following error message:

    Error occurred during file generation. Microsoft Excel cannot access the file 'D:\xx\xx\abc.xls'. There are several possible reasons:
    • The file name or path does not exist. (Actually, it's there.)
    • The file is being used by another program. (That can't happen.)
    • The workbook you are trying to save has the same name as a currently open workbook.

    In IIS7 I use the DefaultAppPool with the identity "myservice", which is in the Administrators group. On the Authentication page of my website in IIS, Anonymous Authentication is enabled and set to "Application pool identity", and ASP.NET Impersonation is disabled. After searching for a solution for hours, I found the following suggestions, but NONE of them work:

    1. Create the folder C:\Windows\SysWOW64\config\systemprofile\Desktop (see this).
    2. Grant rights to "myservice" in Component Services (see this).

    One thing is strange: there is nothing in the IIS_IUSRS group. Is that normal? I remember there being at least two users (DefaultAppPool & Classic .NET AppPool). Please tell me how to fix the access problem. I assume it's a permissions problem in IIS, but I can't solve it. Thank you.


  • SQL Server 2008 Express + Reporting Services on Windows 7

    - by TimothyP
    I'm trying to install SQL Server 2008 Express and Reporting Services on an x64 Windows 7 machine for development purposes. I installed SQL Server 2008 Express with the Microsoft Web Platform Installer. I had to manually enable the SQL Server Browser in the SQL Server Configuration Manager, and I tried to enable the SQL Server Agent, but that simply doesn't work; it keeps throwing an RPC error: "The remote procedure call failed. [0x800706be]". The start mode is set to Disabled and I cannot change it.

    Even though I selected SQL Server Express with Advanced Services in the Web Platform Installer, I could not find any reference to SQL Server Reporting Services, so I used the SQL Server Installation Center x64 application to "upgrade" to SQL Server 2008 Express with Advanced Services. This installed many things, but I still couldn't find any reference to SQL Server Reporting Services other than an application called "Reporting Services Configuration Manager". This opens a dialog called "Reporting Services Configuration Connection", which asks for a server name (it shows the name of my machine) and has a Find button. When I click the Find button I get: "Unable to connect to the Reporting Server WMI provider. Details: Invalid Namespace".

    I found some references on the web to solve this problem, but they refer to a directory, "%ProgramFiles%\Microsoft SQL Server\MSRS10.SQL2008\Reporting Services\", which does not exist anywhere on my system. (The directories for SQL Server are there, but there is no Reporting Services directory anywhere.) What am I doing wrong here? Wasn't the Web Platform Installer supposed to handle all this? Thanks for any advice.

    PS: Most Google results refer to 2005 vs. 2008 problems, but I never had 2005 installed on this system; it's a newly installed development machine.


  • dovecot/postfix: can send & receive via Webmin, but SquirrelMail and Outlook fail to connect

    - by Jonathan
    I have just finished setting up Dovecot and Postfix on my server (CentOS 5.5/Apache) earlier today. So far I've been able to get email working through Webmin (I can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following errors:

    Connected to xxx.xxx.xx.xxx.
    Escape character is '^]'.
    +OK Dovecot ready.
    USER mailtest
    +OK
    PASS *********
    +OK Logged in.
    -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48]
    Connection closed by foreign host.

    Which further logs the following errors:

    dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS
    dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
    dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied
    dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48]
    dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0

    Also, when attempting to log in to SquirrelMail or access the account via Thunderbird/Live Mail etc., it obviously fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource and tried every suggestion for my dovecot.conf file, but so far nothing seems to work. I feel like it may be a permissions/ownership issue, but I'm lost as to the specifics.



  • Why doesn't NFS recognize a new UID?

    - by user76177
    I have two servers running RHEL6. I have root access to both. The main server, which I will refer to as server, is a database server. The application server, which I will refer to as client, mounts a directory from server via NFS. There is a user, appuser, on both client and server. However, appuser's UID on client is 502, while appuser's UID on server is 506. Both users need read and write capability on the NFS share. To facilitate this, I made the share owned by appuser on server. Running id appuser on each yields: uid=506(appuser). Of course, client does not recognize that ownership, since appuser has a different UID on client. So I did the following:

    1. Changed the UID of appuser in /etc/passwd on client to 506.
    2. Changed ownership of appuser's $HOME on client back to appuser so that I could log in.

    Now, when I look at the NFS share from the client side, I see that it is owned by 502, the OLD UID for appuser on client. I can't change ownership of the NFS share from client, since that volume physically resides on server. I need the NFS share to show ownership by appuser from both server and client. What step have I missed since changing the appuser UID on client? NOTE: I have not rebooted client (or done anything else yet).


  • Proper configuration for Windows SMTP Virtual Server to only send email from localhost, and tracking down source of spam emails

    - by ilasno
    We manage a server hosted on Amazon EC2 that has web applications that need to be able to send outgoing email. Recently we received a notice from Amazon about possible email abuse on that server, so I've been looking into it. It's Windows Server Datacenter (2003, I guess), and uses the SMTP Virtual Server (the one that requires IIS 6 for administration). The settings on the Access tab are as follows:

    - Authentication: Anonymous
    - Connection: only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)
    - Relay: only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)

    In the SMTP logs there are many entries like the following:

    2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 FROM: 0 0 4 0 26364 SMTP - - - -
    2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26536 SMTP - - - -
    2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 TO: 0 0 4 0 26536 SMTP - - - -
    2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26707 SMTP - - - -

    ([email protected] is sending quite a lot of emails :-/)

    Can anyone confirm whether the SMTP server settings seem correct? I'm also wondering if a web application on the machine could be exposing a contact form or something else that would allow this sort of abuse; I'm looking into that (and how to look into that) further.


  • Putting and configuring grub on an external drive

    - by HisDudeness
    I want to put a bunch of minor emergency operating systems (such as GParted Live, DSL, Puppy Linux and so on) on a partitioned USB pen drive, with a dedicated grub boot loader on it, which I want to start when I select the drive in the BIOS boot menu. The problem is: when writing grub boot entries I must say where the kernel and initial ramdisk files are located, but the USB drive can have a different device name depending on when I plugged it in and whether other external drives were mounted. So, how can I write boot entries that automatically refer to the drive grub is installed on, without having to specify absolute paths that might change (I mean, like (hd1,msdos1) or /dev/sdb1)? And, while we are at it, can grub work on a device without an operating system on it to which it can refer? I mean, I want to run the command sudo grub-install /dev/sdb from the LMDE system I'm on right now, but I won't have LMDE on my pen drive. Is that a problem? And if I install grub on another device, will I keep the grub I have right now on my HDD?


  • Is there any way to get an ExtJS GridPanel to automatically resize its width, but still be contained

    - by Micah
    I want to include an ExtJS GridPanel inside a larger layout, which in turn must be rendered inside a particular div in some pre-existing HTML that I don't control. From my experiments, it appears that the GridPanel only resizes itself correctly if it's within a Viewport. For instance, with this code the GridPanel automatically resizes:

    new Ext.Viewport({
        layout: 'anchor',
        items: [{
            xtype: 'panel',
            title: 'foo',
            layout: 'fit',
            items: [{
                xtype: 'grid',
                // define the grid here...

    but if I replace the first three lines with the lines below, it doesn't:

    new Ext.Panel({
        layout: 'anchor',
        renderTo: 'RenderUntoThisDiv',

    The trouble is, Viewport always renders directly to the body of the HTML document, and I need to render within a particular div. If there is a way to get the GridPanel to resize itself correctly despite not being contained in a Viewport, that would be ideal. If not, if I could get the Viewport to render its elements within the div, I'd be fine with that. All of my ExtJS objects can be contained within the same div. Does anybody know of a way to get a GridPanel to resize itself correctly, but still be contained inside some non-ExtJS-generated HTML?


  • Label in PyQt4 GUI not updating with every loop of FOR loop

    - by user297920
    I'm having a problem where I wish to run several command-line programs from a Python GUI application. I don't know if my problem is specific to PyQt4 or if it's due to my bad use of Python code. What I want is for a label on my GUI to change its text value to inform the user which command is being executed. My problem arises when I run several commands in a for loop: I would like the label to update itself on every iteration, but the GUI label only updates once the entire loop has completed, displaying only the last command that was executed. I am using PyQt4 for my GUI environment, and I have established that the text variable for the label is indeed being updated with every loop, but the change does not show up visually in the GUI. Is there a way for me to force the label to update itself? I have tried the update() and repaint() methods within the loop, but they make no difference. I would really appreciate any help. Thank you. Ronny. Here is the code I am using:

    # -*- coding: utf-8 -*-
    import sys, os
    from PyQt4 import QtGui, QtCore

    Gui = QtGui
    Core = QtCore

    # ========================================== CREATE WINDOW OBJECT CLASS
    class Win(Gui.QWidget):
        def __init__(self, parent=None):
            Gui.QWidget.__init__(self, parent)
            # ------------------------------------------- SETUP PLAY BUTTON
            self.but1 = Gui.QPushButton("Run Commands", self)
            self.but1.setGeometry(10, 10, 200, 100)
            # ------------------------------------------------ SETUP LABELS
            self.label1 = Gui.QLabel("No Commands running", self)
            self.label1.move(10, 120)
            # ----------------------------------------------- SETUP ACTIONS
            self.connect(self.but1, Core.SIGNAL("clicked()"), runCommands)

    # =============================================== RUN COMMAND FUNCTION
    def runCommands():
        for i in commands:
            win.label1.setText(i)    # Make label display the command being run
            print win.label1.text()  # This shows that the value is actually
                                     # changing with every loop, but it's just
                                     # not being reflected in the GUI label
            os.system(i)

    # ================================================================ MAIN
    # ----------------------------------------------- THE TERMINAL COMMANDS
    com1 = "espeak 'sentence 1'"
    com2 = "espeak 'sentence 2'"
    com3 = "espeak 'sentence 3'"
    com4 = "espeak 'sentence 4'"
    com5 = "espeak 'sentence 5'"
    commands = (com1, com2, com3, com4, com5)

    # ------------------------------------------ SETUP THE GUI ENVIRONMENT
    app = Gui.QApplication(sys.argv)
    win = Win()
    win.show()
    sys.exit(app.exec_())


  • Any other ideas for prototyping?

    - by davehamptonusa
    I've used Douglas Crockford's Object.beget, but augmented it slightly to:

    Object.spawn = function (o, spec) {
        var F = function () {}, that = {}, node = {};
        F.prototype = o;
        that = new F();
        for (node in spec) {
            if (spec.hasOwnProperty(node)) {
                that[node] = spec[node];
            }
        }
        return that;
    };

    This way you can "beget" and augment in one fell swoop:

    var fop = Object.spawn(bar, {
        a: 'fast',
        b: 'prototyping'
    });

    In English that means, "Make me a new object called fop with bar as its prototype, but change or add the members a and b." You can even nest the spec to prototype deeper elements, should you choose:

    var fop = Object.spawn(bar, {
        a: 'fast',
        b: Object.spawn(quux, {
            farple: 'deep'
        }),
        c: 'prototyping'
    });

    This can help avoid hopping into an object's prototype unintentionally in a long object name like:

    foo.bar.quux.peanut = 'farple';

    If quux is part of the prototype and not foo's own object, your change to peanut will actually change the prototype, affecting all objects prototyped by foo's prototype object. But I digress...

    My question is this. Your spec can itself be another object, and that object could itself have properties inherited from its own prototype; you may want those properties (and at the very least you should be aware of them before deciding to use it as a spec). So I want to be able to grab all of the elements from the spec's entire prototype chain, except for the prototype object itself, flattening them into the new object. Should I use:

    Object.spawn = function (o, spec) {
        var F = function () {}, that = {}, node = {};
        F.prototype = o;
        that = new F();
        for (node in spec) {
            that[node] = spec[node];
        }
        that.prototype = o;
        return that;
    };

    I would love thoughts and suggestions...


  • JsonParseException on Valid JSON

    - by user2909602
    I am having an issue calling a RESTful service from my client code. I have written the RESTful service using CXF/Jackson, deployed it to localhost, and tested it successfully using RESTClient. Below is a snippet of the service code:

    @POST
    @Produces("application/json")
    @Consumes("application/json")
    @Path("/set/mood")
    public Response setMood(MoodMeter mm) {
        this.getMmDAO().insert(mm);
        return Response.ok().entity(mm).build();
    }

    The model class and DAO work successfully, and the service itself works fine using RESTClient. However, when I attempt to call this service from JavaScript, I get the error below on the server side:

    Caused by: org.codehaus.jackson.JsonParseException: Unexpected character ('m' (code 109)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')

    I have copied the client-side code below. To make sure it has nothing to do with the JSON data itself, I used a valid JSON string (which works using RESTClient, the JSON.parse() method, and JSONLint) in the vars 'json' (string) and 'jsonData' (JSON):

    var json = '{"mood_value":8,"mood_comments":"new comments","user_id":5,"point":{"latitude":37.292929,"longitude":38.0323323},"created_dtm":1381546869260}';
    var jsonData = JSON.parse(json);
    $.ajax({
        url: 'http://localhost:8080/moodmeter/app/service/set/mood',
        dataType: 'json',
        data: jsonData,
        type: "POST",
        contentType: "application/json"
    });

    I've seen the JsonParseException a number of times in other threads, but in this case the JSON itself appears to be valid (and tested). Any thoughts are appreciated.


  • Silverlight: how to use a scroll viewer to wrap a list view without specifying height?

    - by John Nicholas
    I have a control that contains a list that varies greatly in length. This control appears in various places, which means I cannot easily calculate its position and desired height. Moreover, all I want is for the ScrollViewer to size itself according to its parent; currently it insists on sizing itself according to its content. As it stands, when I have a list that exceeds the height of the screen, the whole control extends off the bottom and the ScrollViewer shows no scroll bar (because it has stretched to the height of its contents and so thinks the bar is not required). I've not included code, as the object graph is fairly deep. What I am looking for is the set of conditions that would cause the ScrollViewer to size itself according to its parent rather than its content. I have it working in a similar situation involving grids and data grids; the unique part of this control is that there is a list containing controls. Any ideas? I would prefer solutions that don't require code-behind, but I'm really not in a position to be choosy.


  • How to log correct context with Threadpool threads using log4net?

    - by myotherme
    I am trying to find a way to log useful context from a bunch of threads. The problem is that a lot of the code runs in events that arrive on thread-pool threads (as far as I can tell), so their names bear no relation to any context. The problem can be demonstrated with the following code:

    class Program
    {
        private static readonly log4net.ILog log = log4net.LogManager.GetLogger(
            System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

        static void Main(string[] args)
        {
            new Thread(TestThis).Start("ThreadA");
            new Thread(TestThis).Start("ThreadB");
            Console.ReadLine();
        }

        private static void TestThis(object name)
        {
            var nameStr = (string)name;
            Thread.CurrentThread.Name = nameStr;
            log4net.ThreadContext.Properties["ThreadContext"] = nameStr;
            log4net.LogicalThreadContext.Properties["LogicalThreadContext"] = nameStr;
            log.Debug("From Thread itself");
            ThreadPool.QueueUserWorkItem(x => log.Debug("From threadpool Thread: " + nameStr));
        }
    }

    The conversion pattern is:

    %date [%thread] %-5level %logger [%property] - %message%newline

    The output is like so:

    2010-05-21 15:08:02,357 [ThreadA] DEBUG LogicalContextTest.Program [{LogicalThreadContext=ThreadA, log4net:HostName=xxx, ThreadContext=ThreadA}] - From Thread itself
    2010-05-21 15:08:02,357 [ThreadB] DEBUG LogicalContextTest.Program [{LogicalThreadContext=ThreadB, log4net:HostName=xxx, ThreadContext=ThreadB}] - From Thread itself
    2010-05-21 15:08:02,404 [7] DEBUG LogicalContextTest.Program [{log4net:HostName=xxx}] - From threadpool Thread: ThreadA
    2010-05-21 15:08:02,420 [16] DEBUG LogicalContextTest.Program [{log4net:HostName=xxx}] - From threadpool Thread: ThreadB

    As you can see, the last two rows carry no name or other useful information to distinguish the two threads, other than manually adding the name to the message (which I want to avoid). How can I get the name/context into the log for the thread-pool threads without adding it to the message at every call?
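    One possible approach (a sketch, not an official log4net facility; QueueWithContext below is a hypothetical helper): since ThreadContext properties live in thread-local storage, capture the value when the work is queued and re-stamp it on the pool thread before the callback runs.

    using System.Threading;

    static class ContextThreadPool
    {
        // Captures the current ThreadContext value and re-applies it on the
        // pool thread, so %property{ThreadContext} is populated there too.
        public static bool QueueWithContext(WaitCallback callback, object state)
        {
            var context = (string)log4net.ThreadContext.Properties["ThreadContext"];
            return ThreadPool.QueueUserWorkItem(s =>
            {
                log4net.ThreadContext.Properties["ThreadContext"] = context;
                callback(s);
            }, state);
        }
    }

    In the sample above, replacing the ThreadPool.QueueUserWorkItem call with ContextThreadPool.QueueWithContext(x => log.Debug("From threadpool Thread"), null) should then show the ThreadContext property on the last two rows as well, without putting the name in the message.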


  • Insane Graphics.lineStyle behavior

    - by Simon
    Hi all, I'd like some help with a little project of mine. Background: I have a little hierarchy of Sprite-derived classes (5 levels, starting from the one that is the root application class in Flex Builder). The width and height properties are overridden so that my class always remembers its requested size (not just the bounding size around its content), and those properties explicitly set scaleX and scaleY to 1, so that no scaling is ever involved. After storing those values, draw() is called to redraw the content.

    Drawing: drawing is very straightforward. Only the deepest object (at 1-indexed level 5) draws anything, into this.graphics, like this:

    var gr:Graphics = this.graphics;
    gr.clear();
    gr.lineStyle(0, this.borderColor, 1, true, LineScaleMode.NONE);
    gr.beginFill(0x0000CC);
    gr.drawRoundRectComplex(0, 0, this.width, this.height, 10, 10, 0, 0);
    gr.endFill();

    Further on: there is also a MouseEvent.MOUSE_WHEEL handler attached to the parent of the object that draws. The handler simply resizes that drawing object.

    Problem (screenshot): when resizing, the hairline border drawn with LineScaleMode.NONE sometimes gains thickness (quite often even 10 px), and it quite often leaves a trail of itself (as seen in the picture, above and below the blue box; notice that the box itself has a one px black border). When I set the lineStyle thickness to NaN, or its alpha to 0, the trail no longer appears. I've been coming back to this problem and dropping it for other stuff for over a week now. Any ideas anyone?

    P.S. The grey background is that of Flash Player itself, not my own choice.. :D

