Search Results

Search found 23568 results on 943 pages for 'select'.


  • Finally! Entity Framework working in fully disconnected N-tier web app

    - by oazabir
    Entity Framework was supposed to solve the problems of Linq to SQL, which requires endless hacks to make it work in an n-tier world. Not only did Entity Framework solve none of the L2S problems, it made things even more difficult to use and hack for n-tier scenarios. It sits somewhere halfway between a fully disconnected ORM and a fully connected ORM like Linq to SQL. Some useful features of Linq to SQL are gone – like automatic deferred loading. If you try to do a simple select with join, insert, update, or delete in a disconnected architecture, you will realize that not only do you need to make fundamental changes from the top layer to the very bottom layer, but also endless hacks in basic CRUD operations. I will show you in this article how I added custom CRUD functions on top of EF’s ObjectContext to make it finally work well in a fully disconnected N-tier web application (my open source Web 2.0 AJAX portal – Dropthings) and how I produced a 100% unit-testable, fully n-tier-compliant data access layer following the repository pattern. http://www.codeproject.com/KB/linq/ef.aspx In .NET 4.0, most of the problems are solved, but not all. So, you should read this article even if you are coding in .NET 4.0. Moreover, there’s enough insight here to help you troubleshoot EF-related problems. You might think, “Why bother using EF when Linq to SQL is doing well enough for me?” Linq to SQL is not going to get any more innovation from Microsoft. Entity Framework is the future of the persistence layer in the .NET Framework. All the innovation is happening in the EF world only, which is frustrating. There’s a big jump in EF 4.0. So, you should plan to migrate your L2S projects to EF soon.
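
    As a rough illustration of the plumbing the article wraps up, here is a minimal sketch of a disconnected update against an EF 4 ObjectContext. The Entities context and Pages entity set are hypothetical names chosen for illustration, not Dropthings' actual code:

        // Sketch only: update an entity that was detached while it travelled
        // through the web tier. "Entities" and "Pages" are hypothetical names.
        public void UpdatePage(Page detachedPage)
        {
            using (var context = new Entities())
            {
                // A detached entity attaches in the Unchanged state...
                context.Pages.Attach(detachedPage);

                // ...so mark it Modified, or SaveChanges will write nothing.
                context.ObjectStateManager.ChangeObjectState(
                    detachedPage, System.Data.EntityState.Modified);

                context.SaveChanges();
            }
        }

    The custom CRUD helpers the article describes exist precisely because every insert, update, and delete needs some variant of this attach-and-set-state dance once entities cross tier boundaries.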

    Read the article

  • Aironet 1200's Auto-Channel Feature: When should it be used?

    - by Josh Brower
    In our building we have around 25 1200-series Aironets, with a bit of overlap in some areas. Up until this point, we have had them deployed on alternating 1/6/11 channels, but we are wondering if we would get better performance if we used the auto-channel select feature. In looking around, I have seen comments that this feature should not be used because the WAP does a channel scan only at radio startup, but I have not found this in any Cisco docs. Anybody have any more information, or real-world experience with this feature? Thanks! -Josh

    Read the article

  • Segmentation fault while switching QCompleter for QLineEdit [on hold]

    - by san
    I have a QLineEdit that uses autocompletion. On a focusIn event it shows paths from an XML list (here I have used a hardcoded list), but if the user doesn't find the path in the list popped up by QCompleter, I want the user to be able to browse to a path by typing '/' in the QLineEdit. I am not able to select the paths, say /Users etc., and on trying to type, a segmentation fault occurs.

        from PyQt4.Qt import Qt, QObject, QLineEdit
        from PyQt4.QtCore import pyqtSlot, SIGNAL, SLOT
        from PyQt4 import QtGui, QtCore
        import sys

        class DirLineEdit(QLineEdit, QtCore.QObject):
            """docstring for DirLineEdit"""
            def __init__(self):
                super(DirLineEdit, self).__init__()
                self.defaultList = ['~/Development/python/searchMethod',
                                    '~/Development/Nuke_python',
                                    '~/Development/python/openexr',
                                    '~/Development/python/cpp2python']
                self.textChanged.connect(self.__dirCompleter)

            def focusInEvent(self, event):
                if len(self.text()) == 0:
                    self._pathsList()
                QtGui.QLineEdit.focusInEvent(self, event)
                self.completer().complete()

            def __dirCompleter(self):
                if len(self.text()) == 0:
                    model = MyListModel(self.defaultList, self)
                    completer = QtGui.QCompleter(model, self)
                    completer.setModel(model)
                else:
                    dirModel = QtGui.QFileSystemModel()
                    dirModel.setRootPath(QtCore.QDir.currentPath())
                    dirModel.setFilter(QtCore.QDir.AllDirs | QtCore.QDir.NoDotAndDotDot | QtCore.QDir.Files)
                    dirModel.setNameFilterDisables(0)
                    completer = QtGui.QCompleter(dirModel, self)
                    completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive)
                    completer.setModel(dirModel)
                self.setCompleter(completer)

            def _pathsList(self):
                completerList = QtCore.QStringList()
                for i in self.defaultList:
                    completerList.append(QtCore.QString(i))
                lineEditCompleter = QtGui.QCompleter(completerList)
                lineEditCompleter.setCompletionMode(QtGui.QCompleter.UnfilteredPopupCompletion)
                self.setCompleter(lineEditCompleter)

        class MyListModel(QtCore.QAbstractListModel):
            def __init__(self, datain, parent=None, *args):
                """ datain: a list where each item is a row """
                QtCore.QAbstractTableModel.__init__(self, parent, *args)
                self.listdata = datain

            def rowCount(self, parent=QtCore.QModelIndex()):
                return len(self.listdata)

            def data(self, index, role):
                if index.isValid() and role == QtCore.Qt.DisplayRole:
                    return QtCore.QVariant(self.listdata[index.row()])
                else:
                    return QtCore.QVariant()

        app = QtGui.QApplication(sys.argv)
        smObj = DirLineEdit()
        smObj.show()
        app.exec_()

    Please help fix this or suggest a better way of implementing it.

    Read the article

  • udhcpc doesn't assign an IP address

    - by Diab
    I have a board running Linux 2.6.28 with one Ethernet interface (eth0), and I want DHCP to assign a dynamic IP to this interface. I have busybox with udhcpc in the file system, and the kernel has "Packet Socket" enabled, so I copied the scripts from "busybox-1.14.1/examples/udhcp" to my board under "/etc/udhcpc/" (I created this directory). When I run:

        ifconfig eth0 up

    the interface comes up but without an IP address. Then, running the following (note: sample.script contains "exec /etc/udhcpc/sample.$1"):

        # udhcpc -i eth0 -s /etc/udhcpc/sample.script
        udhcpc (v1.14.1) started
        Sending discover...
        Sending select for 192.168.10.198...
        Lease of 192.168.10.198 obtained, lease time 691200

    But when I check with ifconfig I can see that it didn't assign the IP address to eth0. Anyone have an idea why udhcpc didn't assign the IP? Thanx

    Read the article

  • Materialized View does not import properly when importing into a second instance of a database

    - by marinus
    When I import a database with materialized view mv_mt into just one database (Oracle), everything is ok:

        create materialized view mv_mt
        refresh complete next trunc( sysdate ) + 1
        as SELECT sysdate, media_type.* from media_type;

    But when I try to import the same database into a copy in another schema, I get the following errors:

        IMP-00017: following statement failed with ORACLE error 1:
         "BEGIN DBMS_JOB.ISUBMIT(JOB=>438,WHAT=>'dbms_refresh.refresh(''"ALEXANDRA"."MV_MT"'');',NEXT_DATE=>TO_DATE('2012-07-02:14:22:36','YYYY-MM-DD:HH24:MI:SS'),INTERVAL=>'sysdate + 1 / 24 / 60 / 6 ',NO_PARSE=>TRUE); END;"
        IMP-00003: ORACLE error 1 encountered
        ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
        ORA-06512: at "SYS.DBMS_JOB", line 100
        ORA-06512: at line 1

        IMP-00017: following statement failed with ORACLE error 23421:
         "BEGIN dbms_refresh.make('"ALEXANDRA"."MV_MT"',list=>null,next_date=>null,interval=>null,implicit_destroy=>TRUE,lax=>FALSE,job=>438,rollback_seg=>NULL,push_deferred_rpc=>TRUE,refresh_after_errors=>FALSE,purge_option=>1,parallelism=>0,heap_size=>0); END;"
        IMP-00003: ORACLE error 23421 encountered
        ORA-23421: job number 438 is not a job in the job queue
        ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
        ORA-06512: at "SYS.DBMS_IJOB", line 793
        ORA-06512: at "SYS.DBMS_REFRESH", line 86
        ORA-06512: at "SYS.DBMS_REFRESH", line 62
        ORA-06512: at line 1

        IMP-00017: following statement failed with ORACLE error 23410:
         "BEGIN dbms_refresh.add(name=>'"ALEXANDRA"."MV_MT"',list=>'"ALEXANDRA"."MV_MT"',siteid=>0,export_db=>'ORCL01'); END;"
        IMP-00003: ORACLE error 23410 encountered
        ORA-23410: materialized view "ALEXANDRA"."MV_MT" is already in a refresh group
        ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
        ORA-06512: at "SYS.DBMS_IREFRESH", line 484
        ORA-06512: at "SYS.DBMS_REFRESH", line 140
        ORA-06512: at "SYS.DBMS_REFRESH", line 125
        ORA-06512: at line 1

    Anyone any ideas? Regards, Marinus

    Read the article

  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is the provision of a base solution for integration with the Database Vault product. Database Vault is part of Oracle’s security portfolio of products and allows database user permissions to be locked down so that only appropriate users get appropriate access to the product data. By default, when you install the product database, administrators and SYSDBA users have full DML access (SELECT, INSERT, UPDATE and DELETE) to the schemas they own and, in the case of SYSDBA users, all schemas on the database. This can be perceived as an issue. Database Vault adds an additional layer of security to disable inappropriate access. In Oracle Utilities Application Framework, a prebuilt Database Vault solution has been provided to grant base DML access to product data for product users only. The solution is shipped with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. The solution contains a Database Vault Realm, RuleSets, Rules and Command Rules that can be used as is or extended to meet site-specific needs. The solution is consistent with the Database Vault solutions provided for other Oracle applications such as PeopleSoft, E-Business Suite, JD Edwards and Siebel. Customers familiar with the Database Vault solutions for those products will recognize the similarities between them. For more details of the solution, refer to Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support at KB Id: 1290700.1.

    Read the article

  • How to transfer Google Earth route to Google Maps?

    - by macias
    I have a GPX route that I imported into Google Earth. Everything was fine, so I saved it as a KMZ file. Then, just as a check, I imported the KMZ back into Google Earth. No problem. The thing is, I would like to work with Google Maps, not Google Earth, and I am not able to transfer this route into Google Maps. Each time I select "show in Google Maps", the view switches from Earth to Maps, but my route is missing. If I use a standalone web browser and try to import either of the files directly into Google Maps, it either falls into an infinite loop (I wait about an hour and still see a progress bar) or Google Maps shows an error. Thus the question: how do I transfer a route from Google Earth to Google Maps? The size of the GPX file is 3 MB; the size of the KMZ is 1 MB.

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn’t work well for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines. This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in its entirety before moving on because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines.

    The Role of SinceID

    Specifying SinceID says to Twitter, “Don’t return tweets earlier than this”. What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries. The next section will explain what I mean by query set, but a quick explanation is that it’s a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here’s some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries:

        // last tweet processed on previous query set
        ulong sinceID = 210024053698867204;

        ulong maxID;
        const int Count = 10;
        var statusList = new List<Status>();

    Here, I’ve hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won’t have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you’re querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won’t return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, initializing SinceID to too low a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you’re trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declarations for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you’ll use the maxID variable to set the MaxID property in queries, which I’ll discuss next.

    Initializing MaxID

    On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don’t know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be. Here’s the code for the initial query:

        var userStatusResponse =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.User &&
                   tweet.ScreenName == "JoeMayo" &&
                   tweet.SinceID == sinceID &&
                   tweet.Count == Count
             select tweet)
            .ToList();

        statusList.AddRange(userStatusResponse);

        // first tweet processed on current query
        maxID = userStatusResponse.Min(
            status => ulong.Parse(status.StatusID)) - 1;

    The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple of reasons why the number of tweets returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet, or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be that there aren’t Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets received during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If the query above doesn’t return any tweets, you’ll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again. Besides querying initial tweets, what’s important about this code is the final statement that sets maxID. It retrieves the lowest numbered status ID in the results. Since the lowest numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set.

    Retrieving Remaining Tweets

    Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you’ll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set:

        do
        {
            // now add sinceID and maxID
            userStatusResponse =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.User &&
                       tweet.ScreenName == "JoeMayo" &&
                       tweet.Count == Count &&
                       tweet.SinceID == sinceID &&
                       tweet.MaxID == maxID
                 select tweet)
                .ToList();

            if (userStatusResponse.Count > 0)
            {
                // first tweet processed on current query
                maxID = userStatusResponse.Min(
                    status => ulong.Parse(status.StatusID)) - 1;

                statusList.AddRange(userStatusResponse);
            }
        }
        while (userStatusResponse.Count != 0 && statusList.Count < 30);

    Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we’ve already read, and Count specifies the largest number of tweets to return. Earlier, I mentioned how it was important to check how many tweets were returned, because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. Reasons why there wouldn’t be results include: if the first query got all the new tweets, there wouldn’t be more to get; and there might not have been any new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID is the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to the oldest, so setting MaxID lets us move from the most recent tweets down to the oldest as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are being read or until a maximum number of tweets has been read. Logically, you want to stop reading when you’ve read all the tweets, and that’s indicated by the fact that the most recent query did not return results. I put in the check to stop after 30 tweets are reached to keep the demo from running too long – in the console the response scrolls past the available buffer and I wanted you to be able to see the complete output. Yet, there’s another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits, and making too many queries per minute will result in an error from Twitter that LINQ to Twitter raises as an exception. To use the API properly, you’ll have to ensure you don’t exceed this threshold. Looking at statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again.

    Summary

    Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID – the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error-handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.

    @JoeMayo

    Read the article

  • Cisco, How to do a subnetting scheme using VLSM and RIP-2?

    - by Andrei T. Ursan
    I'm studying for my CCNA exam and I have to create a VLSM scheme using RIP-2 for the following requirements (this is an exercise):

    - Use the class C network 192.168.1.0 for your point-to-point connections.
    - Using the class A network 10.0.0.0, plan for the following number of hosts in each location: New York: 1000, Chicago: 500, Los Angeles: 1000.
    - On the LAN and point-to-point connections, select subnet masks that use the smallest ranges of IP addresses possible given the above requirements.
    - In all cases, use the lowest possible subnet numbers. Subnet zero is allowed.

    My guess is the following (sanity-check arithmetic below):

        New York:    S0/0 192.168.1.1 /24   Fa0/0 10.1.0.1 netmask 255.255.248.0 - because we need 1000 hosts
        Chicago:     S0/0 192.168.1.2 /24   Fa0/0 10.2.0.1 netmask 255.255.252.0 (for 500 hosts)
        Los Angeles: S0/0 192.168.2.3 /24   Fa0/0 10.3.0.1 netmask 255.255.248.0 (for 1000 hosts)

    Is this a good configuration? I'm reading the CCNA book but not everything is very clear, so I decided to do some exercises... Thank you!
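
    As a quick sanity check on the mask arithmetic (usable hosts per subnet = 2^(host bits) - 2):

        /21 (255.255.248.0) -> 2^11 - 2 = 2046 hosts
        /22 (255.255.252.0) -> 2^10 - 2 = 1022 hosts (smallest mask covering 1000)
        /23 (255.255.254.0) -> 2^9  - 2 = 510 hosts  (smallest mask covering 500)

    So /21 works for 1000 hosts but /22 already suffices, and /23 suffices for 500, which is worth checking against the "smallest ranges of IP addresses possible" requirement.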

    Read the article

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize. Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards).

    Parameter Location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine Mode: Both

    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation will be affected by forecast tree branch size. TargetTaskSize is automatically calculated. It holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.

    - As the forecast is generated, the engine goes up the tree using max_fore_level and not top_level - 1. Max_fore_level has to be less than or equal to top_level - 1. Due to this requirement, combinations falling under the same top level - 1 member must be in the same task. A member of the top level - 1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.

    - Revealing the current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This is deprecated in 7.3.1.3 since there is no longer a means of adjusting the branch size directly. The focus is now on proper hierarchy / forecast design.

    - Control of tasks: The number of tasks created is the lowest of the number of branches (as defined by top level - 1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You may be used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.

    - Discovery of current branch size: review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or, at times, completely removed from the forecast tree.

    - Control of forecast tree branch size:

      - Run the following SQL to determine how evenly the branches are being split by the engine:

            select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;

        This will give you an understanding of whether some of the individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.

      - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator):

            select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;

        This will give us an understanding of which level of the forecast tree the forecast is being generated at. Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on the results of this, we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.

    For example:

    - Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or, at times, completely removed from the forecast tree.

    - For example, if the highest level of the forecast tree is set to Brand/All Locations, and you have 10 brands but 2 of the brands account for 67% and 29% of all combinations, there is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group. It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.

    - A correctly configured forecast tree is something that is done by the implementation team and is not the responsibility of Oracle Support.

    Allocation will be affected by forecast tree branch size. When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for TargetTaskSize depending on the number of engines.

    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: DEV strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).

    - How do you control the number of engines? Determine how many CPUs are on the machine(s) running the engine. As mentioned earlier, the general rule is that you should designate 2 engines per available CPU. So, for example, if you are running the engine on a machine that has 4 CPUs then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.

    - Where do I set the number of engines? To add multiple computers where the engine will run, back up the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this will allow 3 engines to start:

          <Entry>
            <Key argument="ComputerNames" />
            <Value type="string" argument="localhost,localhost,localhost" />
          </Entry>

    Otherwise, if there are no additional engines defined, the calculated value of TargetTaskSize is used. (Oracle does not recommend changing the default value.) TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations (level 1 combinations are known as the group size). The engine manager will use this parameter to attempt to create branches of similar size. Note: the engine manager will not create engines that do not have a branch. The engine divider algorithm uses the value of TargetTaskSize as a system-preferred branch size to create branches that are more equal in size, which improves engine performance. The engine divider will try to add as many tasks as possible to an existing branch, up to the limit of TargetTaskSize level 1 combinations, before adding new branches.

    Coming up next:
    - The engine divider
    - Group size
    - Level 1 combinations
    - MAX_FORE_LEVEL
    - Engine Parameters

    Read the article

  • Cannot get Backup Exec to back up Exchange.

    - by Shawn Gradwell
    I have a media server running Windows Server 2008 SP2 and Backup Exec 2010 R2. The SQL and other Windows agents work, but I cannot back up the Exchange 2010 server running Windows Server 2008 R2. I have the correct license for the Exchange agent installed on the media server, and I installed the Exchange Management Tools on the media server. The 'Microsoft Exchange Database Availability Group' option is greyed out, and if I select the server under a new backup job I can expand the 'Microsoft Information Store' option and see the mail database name, but it shows 0 KB. When I try to back it up, it gives an error displaying:

        The job failed with the following error: Backup Exec attempted to back up an
        Exchange database according to the job settings. The database was not found,
        however. Update the selection list and run the job again.

    Read the article

  • Lossless cutting of MPEG TS files in Windows

    - by Sebastian P.R. Gingter
    I have several HD video files in transport stream (.ts) format, recorded with my satellite receiver. I want to cut them, as in simply remove a few minutes from the beginning, the end, and sometimes a few minutes in the middle (to remove early starts of recordings, late ends and, for a few files, the ads). What is a good, ideally but not necessarily free, piece of software with a GUI to do this? Best would be something where you can select points on a timeline and simply cut the elements out. As a resulting file, the same .ts format would be great, but I could also live with putting the video contents into another container, as long as the video is NOT re-encoded / transcoded. The files have additional audio streams and subtitles. These should be retained in the process. My OS is Windows.

    Read the article

  • How to make PulseAudio and Ubuntu detect the same audio devices as the ALSA driver

    - by Kiwy
    I use Ubuntu 14.04 x64 with gnome-shell on my laptop. I have a Bose Companion 5 (which is basically a USB sound system) and an HDMI port; both work perfectly when I boot with the cable plugged in. However, when my laptop goes to sleep or gets unplugged from those two outputs, if I plug the device back in I end up without any hardware detection (only the built-in speakers) from pulse and the gnome-shell sound output selector, while in alsamixer the device shows up and is ready. gstreamer-properties lets me select and test any device effectively, but while ALSA recognizes any device on the fly, pulse is not capable of handling things correctly. My question is then: how can I make pulse detect and use the same hardware as ALSA, or how can I remove PulseAudio completely and gracefully (meaning the volume applet still running in gnome shell)? I don't mind if the project implies recompiling half of gnome shell if it means those audio outputs work all the time. Pulse does not list my sound card when I use the command pactl list cards, while the plug-and-play module for sound cards is loaded in pactl list modules. I really don't know what to do; the behavior seems pretty random.

    Read the article

  • Windows 7 Phone Developer Tools CTP download

    - by mbcrump
    For those that don’t know, you can download the Windows Phone 7 developer tools now. It is available here. I have installed it and wanted to share my experience so far. You can read the pre-release documentation here. First, here is what comes with the install. The Windows Phone Developer Tools CTP includes the following:

    - Visual Studio 2010 Express for Windows Phone CTP
    - Windows Phone Emulator CTP
    - Silverlight for Windows Phone CTP
    - XNA 4.0 Game Studio CTP

    First impressions:

    - No ISO image install (bad for me because I use multiple machines and have to install from a bootstrapper).
    - It’s around a 228 MB download.
    - I already have the VS2010 RC, but it still makes me install the VS2010 Express Edition.
    - The Windows Phone Emulator will only work with VS2010. No support for VS2005/2008.
    - You need at least a DX10-compatible graphics card.

    Final word (you are probably going to need this info): To start a new project, go to Installed Templates and select Silverlight for Windows Phone and then Windows Phone Application. Use Silverlight for WPF-style applications or XNA for Windows Phone games.

    Read the article

  • How can I find out which driver/file is being loaded when the system hangs during the Windows 7 boot

    - by user24247
    My desktop computer (1 OS, 1 drive, 1 partition) hangs during the Windows 7 boot process. When pressing F8 I can select Safe Boot, which allows me to see the files processed during the boot process. I know that the last line displayed is the last file that was SUCCESSFULLY loaded. How do I find out what the next line, and the potential candidate driver/file/program, would have been? The unusual thing, at least in my experience, is that the freezing up of the system also happens when I boot from the Windows 7 install disks, which is preventing me from using any repair options. With a failure of both, I cannot restore Windows 7 to a previous date or uninstall drivers/programs that may be the cause of the hanging. Thanks for your responses. Marc

    Read the article

  • Read-only lock on a SharePoint site collection, or Why can't I edit anymore?

    - by PeterBrunone
    Monday morning, the calls started.  For some reason, long-time users were unable to edit list items.  I figured we had a permissions issue, so I popped in to look at the Site Settings -- and found that I couldn't.  A quick trip to Central Administration showed that I was still listed as a Site Collection Administrator, but I had no power at all on the site collection in question.  A quick glance at the logs told me that the server had recently shut down unexpectedly (this is a Hyper-V virtual machine).  Apparently, in the confusion, somehow SharePoint decided to lock the site collection as read-only.  This can be remedied in one of two ways:

    1) In Central Administration, go to Application Management -> SharePoint Site Management -> Site collection quotas and locks.  Once you have arrived, select the correct application and site collection, and you will have the opportunity to view and set the lock status of the collection (it most likely will be set to "Read-only", and you'll want to move that radio button to "Not locked").

    2) Fire up stsadm and issue the following command:

        stsadm -o setsitelock -url http://myportalsitecollection -lock none
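
    A third route, sketched here as an assumption rather than taken from the original post, is to clear the lock from code on a farm server where the SharePoint server object model is available:

        // Hypothetical C# snippet; requires a reference to Microsoft.SharePoint
        // and must run on a server in the farm.
        using (SPSite site = new SPSite("http://myportalsitecollection"))
        {
            // Clear every lock flag so the collection returns to "Not locked".
            site.ReadOnly = false;
            site.ReadLocked = false;
            site.WriteLocked = false;
        }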

    Read the article

  • Do you know about the Visual Studio 2010 Architecture Guidance?

    - by Martin Hinshelwood
    If you have not seen the Visual Studio 2010 Architectural Guidance from the Visual Studio ALM Rangers then you are missing out. I have been spelunking the TFS Guidance recently and I discovered the Visual Studio 2010 Architectural Guidance. This is not an in-depth look at the capabilities of the architectural tools that shipped with Visual Studio 2010 Ultimate, but is instead a set of samples that lead you by example through real-world scenarios. There is practical guidance, and there are checklists to help guide lead developers and architects through the common challenges in understanding both existing and new applications. The content concentrates on practical guidance for Visual Studio 2010 Ultimate and is focused on the modelling tools. There is integration into Visual Studio, so all you need to do to access it is select “Architecture | Visual Studio ALM Rangers – Architecture Guidance”.

    Figure: Accessing the Architecture guidance is easy

    This brings up an inline version of the documentation and a kind of explorer that lets you pick the tasks you want to perform and takes you straight to that part of the guidance.

    Figure: Access the Guidance from right within Visual Studio 2010

    This is a big help when you just want to figure out how to do something and can’t be bothered searching for and through the content in the provided Word documents. The Question and Answer section is full of useful content, and there are six Hands-On-Labs to sink your teeth into:

    - Creating extensions with the feature extension
    - Explore an Existing System Scenario
    - Extensibility Layer Diagrams
    - New Solution Scenario
    - Reusable Architecture Scenario
    - Validating an Architecture Scenario

    I’m sold! Where can I get my hands on this fantastic content? Download the Visual Studio 2010 Architecture Tooling Guidance, and if you like it don’t forget to add a review to make the team that put it together in their spare time feel all the more loved.

    Read the article

  • Script for running a script

    - by user31568
    Hello everyone. There is a script:

        Dim WSHShell, WinDir, Value, wshProcEnv, fso, Spath
        Set WSHShell = CreateObject("WScript.Shell")

        Dim objFSO, objFileCopy
        Dim strFilePath, strDestination
        Const OverwriteExisting = True

        Set objFSO = CreateObject("Scripting.FileSystemObject")
        Set windir = objFSO.getspecialfolder(0)
        objFSO.CopyFile "\\dv.rt.ru\SYSVOL\DV.RT.RU\scripts\shutdown.vbs", windir & "\", OverwriteExisting

        strComputer = "."
        Set objWMIService = GetObject("winmgmts:" _
            & "{impersonationLevel=impersonate}!\\" _
            & strComputer & "\root\cimv2")

        JobID = "1"
        Set colScheduledJobs = objWMIService.ExecQuery _
            ("Select * from Win32_ScheduledJob")
        For Each objJob in colScheduledJobs
            objJob.Delete
        Next

        Set objNewJob = objWMIService.Get("Win32_ScheduledJob")
        errJobCreate = objNewJob.Create _
            (windir & "\shutdown.vbs", "**093000.000000+660", _
            True, 1 OR 2 OR 4 OR 8 OR 16 OR 32 OR 64, , True, JobId)

    How can I make shutdown.vbs run not once at 9:00, but from 9:00 to 12:00? Thanks

    Read the article

  • What could be causing frequent display freezes?

    - by austen
    I just installed Ubuntu 14.04 two days ago (coming from Win8), and in the two days that I've been using it, my display has frozen four or five times. The mouse won't move, but the keyboard does respond, so I can use the Ctrl+Alt+Bkspc command to fix it. It seems like it might just be the display freezing, because one of the times I was watching a YouTube video and the audio continued playing. I have an Nvidia graphics card with the most recent Nvidia drivers for it enabled. I see that a lot of questions about Ubuntu freezing get marked as duplicates and pretty much always linked back to a thread about what to do when it freezes. Clearly, I've got that bit figured out already, and I did read that thread for further advice. What I'm looking for, though, is how to fix this permanently.

    Output from lspci -nnk | grep -iA2 VGA:

        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
            Subsystem: Lenovo Device [17aa:2200]
            Kernel driver in use: i915

    Update: JohnnyEnglish pointed out that Ubuntu is using the integrated graphics, not my Nvidia card. It turns out my laptop uses Nvidia Optimus and I cannot enable only the graphics card through the BIOS. I found out about Nvidia Prime and got it set up using this article. The settings panel which allows you to select the graphics says that 'performance mode' is enabled, but when I check which graphics controller is enabled through the terminal, it still says it's using the integrated graphics. I'm not sure if this could be causing the freezes, but I guess it's a starting point. Any ideas on how to resolve this?

    Read the article

  • Redhat 6 GUI installation vs. kickstart gives me different packages?

    - by jonaz
    If I do the graphical install and select Basic Server plus aide and screen, I get a system with 535 installed packages. If I look at the /root/anaconda-ks.cfg file in that freshly installed system, I see:

        %packages
        @base
        @console-internet
        @core
        @debugging
        @directory-client
        @hardware-monitoring
        @java-platform
        @large-systems
        @network-file-system-client
        @performance
        @perl-runtime
        @security-tools
        @server-platform
        @server-policy
        @system-admin-tools
        pax
        python-dmidecode
        oddjob
        sgpio
        certmonger
        pam_krb5
        krb5-workstation
        nscd
        pam_ldap
        nss-pam-ldapd
        perl-DBD-SQLite
        aide
        screen

    If I then install a NEW system using a kickstart containing only those packages, I get 620 installed packages. So basically my question is: why does the system install almost 100 more packages when using kickstart compared to the GUI installation when the exact same package groups are selected?

    Read the article

  • Tweaking log4net Settings Programmatically

    - by PSteele
    A few months ago, I had to dynamically add a log4net appender at runtime.  Now I find myself in another log4net situation.  I need to modify the configuration of my appenders at runtime. My client requires all files generated by our applications to be saved to a specific location.  This location is determined at runtime.  Therefore, I want my FileAppenders to log their data to this specific location – but I won't know the location until runtime, so I can't add it to the XML configuration file I'm using. No problem.  Bing is my new friend and returned a couple of hits.  I made a few tweaks to their LINQ queries and created a generic extension method for ILoggerRepository (just a hunch that I might want this functionality somewhere else in the future – sorry YAGNI fans):

        public static void ModifyAppenders<T>(this ILoggerRepository repository, Action<T> modify)
            where T : log4net.Appender.AppenderSkeleton
        {
            // enumerate the repository's appenders of the requested type
            var appenders = from appender in repository.GetAppenders()
                            where appender is T
                            select appender as T;

            foreach (var appender in appenders)
            {
                modify(appender);
                appender.ActivateOptions();
            }
        }

    Now I can easily add the proper directory prefix to all of my FileAppenders at runtime:

        log4net.LogManager.GetRepository().ModifyAppenders<FileAppender>(a =>
        {
            a.File = Path.Combine(settings.ConfigDirectory, Path.GetFileName(a.File));
        });

    Thanks beefycode and Wil Peck.

    Technorati Tags: .NET, log4net, LINQ

    Read the article

  • I'm stuck on the User Defined Session desktop environment

    - by Dan
    I just installed Ubuntu for the first time, dual boot, so I get to choose Ubuntu or Windows. I then changed the setting so it doesn't ask for my password when booting up. I then installed the Edubuntu desktop package. I then hit System and logged out so I could be at the login screen, which also lets you select the desktop environment. Edubuntu was not there, but User Defined Session was, so I clicked that, thinking it might be Edubuntu, and logged in. Now I'm totally stuck. There is only wallpaper on the screen (as I realize now, that is normal for a User Defined Session), but there is no logout button to change desktop environments, and since I set it to not ask for a password at boot up, there is no option to change it at startup. If I hit Ctrl+Alt+Del it only lets you shut down, restart, suspend, or hibernate.... no log out. I have hit every key on the keyboard hoping something will pop up. I thought this must be a simple noob mistake and that there must be endless articles about this, so I did searches on Google and forums and was shocked to find nothing about it. My next step, unless someone can help, is to uninstall and reinstall.

    Read the article

  • Ad-hoc String Manipulation With Visual Studio

    - by Liam McLennan
    Visual Studio supports relatively advanced string manipulation via the ‘Quick Replace’ dialog. Today I had a requirement to modify some html, replacing line breaks with unordered list items. For example, I needed to convert:

        Infrastructure<br/>
        Energy<br/>
        Industrial development<br/>
        Urban growth<br/>
        Water<br/>
        Food security<br/>

    to:

        <li>Infrastructure</li>
        <li>Energy</li>
        <li>Industrial development</li>
        <li>Urban growth</li>
        <li>Water</li>
        <li>Food security</li>

    This cannot be done with a simple search-and-replace, but it can be done using the Quick Replace regular expression support. To use regular expressions, expand ‘Find Options’, check ‘Use:’ and select ‘Regular Expressions’. Typically, Visual Studio regular expressions use a different syntax to every other regular expression engine. We need to use a capturing group to grab the text of each line so that it can be included in the replacement. The syntax for a capturing group is to wrap the part of the expression to be captured in { and }. So my regular expression:

        {.*}\<br/\>

    means capture all the characters before <br/>. Note that < and > have to be escaped with \. In the replacement expression we can use \1 to insert the previously captured text (so the replacement here is <li>\1</li>). If the search expression had a second capturing group then its text would be available in \2, and so on. Visual Studio’s quick replace feature can be scoped to a selection, the current document, all open documents or every document in the current solution.

    Read the article

  • Sharing Internet Connection in Windows 7 is so much more frustrating than in Windows XP

    - by Phuong Nguyen
    Back in the days of Windows XP, from the Properties dialog of my wireless connection, I could enable sharing and then select the LAN network from the drop-down list and, boom, I could share it with my friend. We just needed a LAN cable (either crossover or not) and his laptop would get an automatic IP to gain access to the internet. But now, with the new Windows 7, everything starts to suck. I cannot see the drop-down list any more in the sharing panel, and my friend's laptop cannot get an automatic IP any more. Am I doing anything wrong here? How can I get back the peace I used to have with Windows XP?

    Read the article

  • AutoVue 20.2 for Agile Released

    - by Angus Graham
    Oracle’s AutoVue 20.2 for Agile PLM is now available on Oracle’s Software Delivery Cloud. This latest release allows Agile PLM customers to take advantage of new AutoVue 20.2 features in the following Agile PLM environments: 9.3.1.x and 9.3.0. AutoVue 20.2 delivers improvements in the following areas.

    New format support: AutoVue 20.2 adds support for the latest versions of popular file formats, including:

    - ECAD: Cadence Concept HDL 16.5, Allegro Layout 16.5, Orcad Capture 16.5, Board Station ASCII Symbol Geometry, Cadence Cell Library
    - MCAD: CATIA V5 R21, PTC Creo Parametric 1.0, Creo Element\Direct Modeling 17.10, 17.20, 17.25, 17.30, 18.00, SolidWorks 2012, SolidEdge ST3 & ST4, PLM XML
    - 2D CAD: Creo Element/Direct Drafting 17.10 to 18.00
    - Office: MS Office 2010: Word, Excel, PowerPoint, Outlook

    Enhancements to AutoVue enterprise readiness: reliability and performance improvements, as well as security enhancements which adhere to Oracle’s Software Security Assurance standards.

    Updated version of the AutoVue Document Print Service offerings, which include the ability to select CAD layers for printing.

    For further details, check out the What’s New in AutoVue 20.2 datasheet.

    Read the article
