
Unix To PowerShell - Find

PowerShell is definitely gaining momentum in the Windows scripting world, but I still hear folks wanting to rely on Unix-based tools to get their job done.  In this series of posts I'm going to look at converting some of the more popular Unix-based tools to PowerShell.

find

The Unix “find” command searches through one or more directory trees of a file system, locating files based on user-specified criteria.  By default, find returns all files below the current working directory.  It can also perform an action on each matched file.
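For reference, here are a couple of classic find invocations, run against a small demo tree (the /tmp/find_demo path is just a placeholder for this illustration):

```shell
# Build a small demo tree: one empty .log file, one .txt file in a subdirectory.
mkdir -p /tmp/find_demo/sub
touch /tmp/find_demo/a.log /tmp/find_demo/sub/b.txt

# All *.log files below the demo root
find /tmp/find_demo -name "*.log"
# -> /tmp/find_demo/a.log

# Empty regular files at most one level below the start path
find /tmp/find_demo -maxdepth 1 -type f -empty
# -> /tmp/find_demo/a.log

rm -rf /tmp/find_demo
```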

In my PowerShell script I have only included the “file location” functions and will leave adding the action feature as an exercise for the reader.

This script starts out by calling the Do-Find function, which stuffs all the command line arguments into a hash table and calls Find-InDirectory with the given start location.  Find-InDirectory gets all child items in the specified location and iterates through that list.  If a child item is a directory, the current depth is incremented, a recursive call to Find-InDirectory is made for that directory, and then the current depth is decremented.  For each item, the Get-IsMatch function is called to determine whether it matches the criteria specified by the command line arguments.

The Unix parameters map to the following in my PowerShell script:

Unix        PowerShell   Description
path        -start       The directory to start the search from (default = ".").
-maxdepth   -maxdepth    Descend at most "n" levels of directories below the start path.
-mindepth   -mindepth    Do not apply tests at levels less than "n" levels below the start path.
-amin       -amin        Only process files that were accessed more recently than "n" minutes ago.
-atime      -atime       Only process files that were accessed more recently than "n"*24 hours ago.
-empty      -empty       Only process empty files or directories.
-name       -name        Only process files whose file name matches the "name" pattern.
-path       -path        Only process files whose full path matches the "path" pattern.
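Assuming the script is saved as Find.ps1 in the current directory, invocations might look like the following (a sketch; the C:\Temp directory and patterns are placeholders):

```powershell
# All *.log files under C:\Temp
.\Find.ps1 -start "C:\Temp" -name "*.log"

# Empty items at most two levels deep, accessed within the last 60 minutes
.\Find.ps1 -start "C:\Temp" -maxdepth 2 -amin 60 -empty $true
```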

 

#----------------------------------------------------------------
# Find.ps1
#----------------------------------------------------------------
param
(
  [string]$start = ".",  # directory to start the search in
  [int]$maxdepth = -1,   # descend at most "n" levels below the start path
  [int]$mindepth = -1,   # don't apply tests at levels less than mindepth
  [int]$amin = -1,       # file was last accessed "n" minutes ago
  [int]$atime = -1,      # file was last accessed "n"*24 hours ago
  [bool]$empty = $false, # only match empty files or directories
  [string]$name = "",    # base of the file name matches pattern
  [string]$path = ""     # full path matches pattern
);
 
$script:CURRENT_DEPTH = 0;
 
#----------------------------------------------------------------
# function Get-IsMatch
#----------------------------------------------------------------
function Get-IsMatch()
{
  param
  (
    $info = $null,
    $context = $null
  );
  [bool]$bIsMatch = $true;
  
  if ( Is-InDepthRange -context $context )
  {
    if ( $context["name"].Length -gt 0 )
    {
      $bIsMatch = $info.Name -like $context["name"];
    }
    elseif ( $context["path"].Length -gt 0 )
    {
      $bIsMatch = $info.FullName -like $context["path"];
    }
    
    if ( $bIsMatch -and ($context["amin"] -ne -1) )
    {
      # Reject items last accessed more than "amin" minutes ago.
      $ts = [DateTime]::Now - $info.LastAccessTime;
      if ( $ts.TotalMinutes -gt $context["amin"] )
      {
        $bIsMatch = $false;
      }
    }
 
    if ( $bIsMatch -and ($context["atime"] -ne -1) )
    {
      # Reject items last accessed more than "atime"*24 hours ago.
      $ts = [DateTime]::Now - $info.LastAccessTime;
      if ( $ts.TotalHours -gt (24 * $context["atime"]) )
      {
        $bIsMatch = $false;
      }
    }
 
    if ( $context["empty"] )
    {
      $bIsEmpty = $false;
      if ( $info -is [System.IO.FileInfo] )
      {
        $bIsEmpty = ( $info.Length -eq 0 );
      }
      elseif ( $info -is [System.IO.DirectoryInfo] )
      {
        # A directory is empty if it contains no files or subdirectories.
        $bIsEmpty = ( $info.GetFileSystemInfos().Length -eq 0 );
      }
      $bIsMatch = $bIsMatch -and $bIsEmpty;
    }
  }
  else
  {
    $bIsMatch = $false;
  }
  
  $bIsMatch;
}
 
#----------------------------------------------------------------
# function Is-InDepthRange
#----------------------------------------------------------------
function Is-InDepthRange()
{
  param($context = $null);
  
  $bInRange = $true;
  if ( $context )
  {
    if ( -1 -ne $context["mindepth"] )
    {
      if ( $script:CURRENT_DEPTH -lt $context["mindepth"] )
      {
        $bInRange = $false;
      }
    }
    if ( -1 -ne $context["maxdepth"] )
    {
      if ( $script:CURRENT_DEPTH -gt $context["maxdepth"] )
      {
        $bInRange = $false;
      }
    }
  }
  
  $bInRange;
}
 
#----------------------------------------------------------------
# function Find-InDirectory
#----------------------------------------------------------------
function Find-InDirectory()
{
  param
  (
    [string]$location = ".", # directory to start the search in
    $context = $null
  );
  
  $cis = Get-ChildItem -Path $location;
  foreach ($ci in $cis)
  {
    # Emit the full name of every matching item, file or directory.
    if ( Get-IsMatch -info $ci -context $context )
    {
      $ci.FullName;
    }
    
    if ( $ci.PSIsContainer )
    {
      # Recurse through directories, tracking the current depth.
      $script:CURRENT_DEPTH++;
      Find-InDirectory -location $ci.FullName -context $context;
      $script:CURRENT_DEPTH--;
    }
  }
}
 
#----------------------------------------------------------------
# function Do-Find
#----------------------------------------------------------------
function Do-Find()
{
  param
  (
    [string]$start = ".",
    [int]$maxdepth = -1,
    [int]$mindepth = -1,
    [int]$amin = -1,
    [int]$atime = -1,
    [bool]$empty = $false,
    [string]$name = "",
    [string]$path = ""
  );
  
  $context = @{
    "maxdepth" = $maxdepth; "mindepth" = $mindepth;
    "amin" = $amin; "atime" = $atime;
    "empty" = $empty; "name" = $name; "path" = $path};
 
  Find-InDirectory -location $start -context $context;
}
 
Do-Find -start $start -maxdepth $maxdepth -mindepth $mindepth -amin $amin `
  -atime $atime -empty $empty -name $name -path $path;

There are a few enhancements that could be made to this script, such as better range checking to eliminate unnecessary directory recursion, and support for actions on matched files.
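It's also worth noting that newer PowerShell versions can do much of this natively.  A rough one-liner equivalent of something like Find.ps1 -name "*.log" -maxdepth 3 -empty $true might look like the following (a sketch; -Depth and -File require PowerShell 5.0 or later):

```powershell
Get-ChildItem -Path . -Recurse -Depth 3 -File |
  Where-Object { $_.Name -like "*.log" -and $_.Length -eq 0 } |
  ForEach-Object { $_.FullName }
```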

You can download the full source for the script here: Find.ps1


More Stories By Joe Pruitt

Joe Pruitt is a Principal Strategic Architect at F5 Networks working with Network and Software Architects to allow them to build network intelligence into their applications.
